
3-1

Chapter 3: Transport Layer

Our goals:
- understand principles behind transport layer services:
  - multiplexing and demultiplexing
  - reliable data transfer
  - flow control
  - congestion control
- learn about transport layer protocols in the Internet:
  - UDP: connectionless transport
  - TCP: connection-oriented transport
  - TCP congestion control

Notes:
1. Multiplexing is to support multiple flows.
2. The network can damage packets, lose packets, and duplicate packets.
3. One of my favorite layers!!

3-2

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-3

Transport services and protocols
- provide logical communication between app processes running on different hosts
- transport protocols run in end systems:
  - send side: breaks app messages into segments, passes to network layer
  - rcv side: reassembles segments into messages, passes to app layer
- more than one transport protocol available to apps
  - Internet: TCP and UDP

(figure: end systems run application, transport, network, link and physical layers; routers run only network, link and physical; logical end-end transport between the end systems)

3-4

Transport vs. network layer
- network layer: logical communication between hosts
- transport layer: logical communication between processes
  - relies on, enhances network layer services

Household analogy: 12 kids sending letters to 12 kids
- processes = kids
- app messages = letters in envelopes
- hosts = houses
- transport protocol = Ann and Bill
- network-layer protocol = postal service

Another analogy:
1. Post office -> network layer
2. My wife -> transport layer

3-5

Internet transport-layer protocols
- reliable, in-order delivery (TCP):
  - congestion control (distributed control)
  - flow control
  - connection setup
- unreliable, unordered delivery: UDP
  - no-frills extension of "best-effort" IP
- services not available:
  - delay guarantees
  - bandwidth guarantees

(figure: protocol stacks of end systems and routers; logical end-end transport)

Research issues?

3-6

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-7

Multiplexing/demultiplexing

Demultiplexing at rcv host: delivering received segments to the correct socket
Multiplexing at send host: gathering data from multiple sockets, enveloping data with a header (later used for demultiplexing)

(figure: three hosts, each with application/transport/network/link/physical layers; processes P1-P4 bound to sockets, e.g. FTP and telnet)

3-8

How demultiplexing works
- host receives IP datagrams
  - each datagram has source IP address, destination IP address
  - each datagram carries 1 transport-layer segment
  - each segment has source, destination port number (recall: well-known port numbers for specific applications)
- host uses IP addresses & port numbers to direct the segment to the appropriate socket

TCP/UDP segment format (32 bits wide): source port #, dest port #, other header fields, application data (message)

3-9

Connectionless demultiplexing
- Create sockets with port numbers:
    DatagramSocket mySocket1 = new DatagramSocket(99111);
    DatagramSocket mySocket2 = new DatagramSocket(99222);
- UDP socket identified by two-tuple: (dest IP address, dest port number)
- When host receives UDP segment:
  - checks destination port number in segment
  - directs UDP segment to socket with that port number
- IP datagrams with different source IP addresses and/or source port numbers are directed to the same socket (this is how a system can serve multiple requests)
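For concreteness, a minimal sketch of a UDP receiver in Java (a hypothetical program; port 6428 is borrowed from the next slide). Every datagram addressed to this host's port 6428 is handed to this one socket, whatever its source IP and port:

  import java.net.DatagramPacket;
  import java.net.DatagramSocket;

  public class UdpDemuxSketch {
      public static void main(String[] args) throws Exception {
          // All segments with destination port 6428 arrive at this socket,
          // regardless of source IP address / source port.
          DatagramSocket serverSocket = new DatagramSocket(6428);
          byte[] buf = new byte[1024];
          while (true) {
              DatagramPacket pkt = new DatagramPacket(buf, buf.length);
              serverSocket.receive(pkt);   // blocks until a segment arrives
              // The source IP/port act only as a "return address" for replies.
              System.out.println("from " + pkt.getAddress() + ":" + pkt.getPort()
                      + " -> " + new String(pkt.getData(), 0, pkt.getLength()));
          }
      }
  }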

3-10

Connectionless demux (cont.)

  DatagramSocket serverSocket = new DatagramSocket(6428);

(figure: client A (P1, source port 9157) and client B (P3, source port 5775) both send to server C with destination port 6428; the server's replies simply swap source and destination ports)

- SP provides "return address"
- Source IP and port can be spoofed
- Demultiplexing is based on destination IP and port

3-11

Connection-oriented demux
- TCP socket identified by 4-tuple:
  - source IP address, source port number, dest IP address, dest port number
- recv host uses all four values to direct segment to appropriate socket
- Server host may support many simultaneous TCP sockets:
  - each socket identified by its own 4-tuple
- Web servers have different sockets for each connecting client
  - non-persistent HTTP will have a different socket for each request

3-12

Connection-oriented demux (cont.)

(figure: clients at IP A and IP B connect to server C, destination port 80, from source ports 9157 and 5775; the server demultiplexes each segment to a different socket/process (P1, P3, P4) using the full 4-tuple (S-IP, SP, D-IP, DP))

3-13

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-14

UDP: User Datagram Protocol [RFC 768]
- "no frills," "bare bones" Internet transport protocol
- "best effort" service; UDP segments may be:
  - lost
  - delivered out of order to app
- connectionless:
  - no handshaking between UDP sender, receiver
  - each UDP segment handled independently of others

Why is there a UDP?
- no connection establishment (which can add delay)
- simple: no connection state at sender, receiver
- small segment header
- no congestion control: UDP can blast away as fast as desired

3-15

UDP: more
- often used for streaming multimedia apps
  - loss tolerant
  - rate sensitive
- other UDP uses: DNS, SNMP
- reliable transfer over UDP: add reliability at the application layer
  - application-specific error recovery (e.g. FTP based on UDP, but with recovery)

UDP segment format (32 bits wide): source port #, dest port #, length, checksum, application data (message)
- length: in bytes of the UDP segment, including the header

When the network is stressed, you PRAY!

3-16

UDP checksum

Goal: detect "errors" (e.g., flipped bits) in transmitted segment

Sender:
- treat segment contents as sequence of 16-bit integers
- checksum: addition (1's complement sum) of segment contents
- sender puts checksum value into UDP checksum field

Receiver:
- compute checksum of received segment
- check if computed checksum equals checksum field value:
  - NO - error detected
  - YES - no error detected. But maybe errors nonetheless? More later ...

e.g. 1+2+3 = 6. So is 0+3+3 = 6 (different data, same checksum).

3-17

Internet Checksum Example

Note: when adding numbers, a carryout from the most significant bit needs to be added back to the result.

Example: add two 16-bit integers

    1110 0110 0110 0110
  + 1101 0101 0101 0101
  -----------------------
  1 1011 1011 1011 1011    (17-bit result; carryout = 1)

  wraparound (add the carry back in):
    1011 1011 1011 1100    sum
    0100 0100 0100 0011    checksum (1's complement of the sum)
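To make the wraparound rule concrete, here is a minimal Java sketch of the 16-bit 1's complement checksum (a hypothetical helper, not code from the slides; it assumes an even number of bytes):

  public class InternetChecksum {
      // 1's complement sum of 16-bit words, with carry wraparound, then inverted.
      static int checksum(byte[] data) {
          int sum = 0;
          for (int i = 0; i < data.length; i += 2) {
              int word = ((data[i] & 0xFF) << 8) | (data[i + 1] & 0xFF);
              sum += word;
              if ((sum & 0x10000) != 0) {      // carryout of the 16th bit:
                  sum = (sum & 0xFFFF) + 1;    // wrap it around
              }
          }
          return (~sum) & 0xFFFF;              // 1's complement of the sum
      }

      public static void main(String[] args) {
          byte[] words = {(byte) 0xE6, 0x66, (byte) 0xD5, 0x55}; // the two words above
          System.out.printf("checksum = 0x%04X%n", checksum(words)); // prints 0x4443
      }
  }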

3-18

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-19

Principles of Reliable data transfer
- important in app., transport, link layers
- top-10 list of important networking topics!
- characteristics of the unreliable channel will determine the complexity of the reliable data transfer protocol (rdt)

(figure: the rdt service abstraction above an unreliable channel; this picture sets the scenario)

3-20

Reliable data transfer: getting started

send side / receive side interfaces:
- rdt_send(): called from above (e.g., by app). Passes data to deliver to receiver upper layer
- udt_send(): called by rdt, to transfer packet over unreliable channel to receiver
- rdt_rcv(): called when packet arrives on rcv-side of channel
- deliver_data(): called by rdt to deliver data to upper layer

Let us now look at the guts of these modules. Any question?

3-21

Reliable data transfer: getting started

We'll:
- incrementally develop sender, receiver sides of reliable data transfer protocol (rdt)
- consider only unidirectional data transfer, but control info will flow in both directions!
- use finite state machines (FSM) to specify sender, receiver

state: when in this "state", the next state is uniquely determined by the next event

(FSM notation: each transition is labeled "event causing state transition / actions taken on state transition")

(DON'T FALL ASLEEP)

Event: timer fires, receives message, ...etc. Action: executes a program, send message, ...etc.

3-22

rdt1.0: reliable transfer over a reliable channel
- underlying channel perfectly reliable
  - no bit errors
  - no loss of packets
- separate FSMs for sender, receiver:
  - sender sends data into underlying channel
  - receiver reads data from underlying channel

sender, state "Wait for call from above":
  rdt_send(data):
    packet = make_pkt(data)
    udt_send(packet)

receiver, state "Wait for call from below":
  rdt_rcv(packet):
    extract(packet, data)
    deliver_data(data)

In reality this is an unrealistic assumption, but ...

3-23

rdt2.0: channel with bit errors
- underlying channel may flip bits in packet
  - recall: UDP checksum to detect bit errors
- the question: how to recover from errors?
  - acknowledgements (ACKs): receiver explicitly tells sender that pkt received OK
  - negative acknowledgements (NAKs): receiver explicitly tells sender that pkt had errors
  - sender retransmits pkt on receipt of NAK
  - human scenarios using ACKs, NAKs?
- new mechanisms in rdt2.0 (beyond rdt1.0):
  - error detection
  - receiver feedback: control msgs (ACK, NAK) rcvr -> sender

Ack: "I love u" - "I love u 2". Nak: "I love u" - "I don't love u".

3-24

rdt2.0: FSM specification

sender:
  state "Wait for call from above":
    rdt_send(data):
      sndpkt = make_pkt(data, checksum); udt_send(sndpkt)
  state "Wait for ACK or NAK":
    rdt_rcv(rcvpkt) && isNAK(rcvpkt):
      udt_send(sndpkt)
    rdt_rcv(rcvpkt) && isACK(rcvpkt):
      (return to "Wait for call from above")

receiver:
  state "Wait for call from below":
    rdt_rcv(rcvpkt) && corrupt(rcvpkt):
      udt_send(NAK)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
      extract(rcvpkt, data); deliver_data(data); udt_send(ACK)

A buffer is needed to store data from the application layer, or to block the call.

3-25

rdt2.0: operation with no errors

(same FSM as above; the no-error path: the sender sends make_pkt(data, checksum), the receiver finds notcorrupt(rcvpkt), extracts and delivers the data and returns an ACK, and the sender moves on to the next data)

3-26

rdt2.0: error scenario

(same FSM; the error path: the packet arrives corrupt(rcvpkt), the receiver sends a NAK, and the sender, on rdt_rcv(rcvpkt) && isNAK(rcvpkt), retransmits sndpkt)

GOT IT?

3-27

rdt2.0 has a fatal flaw!

What happens if ACK/NAK corrupted?
- sender doesn't know what happened at receiver!
- can't just retransmit: possible duplicate

What to do?
- sender ACKs/NAKs the receiver's ACK/NAK? What if the sender's ACK/NAK is lost?
- retransmit, but this might cause retransmission of a correctly received pkt!

Handling duplicates:
- sender adds sequence number to each pkt
- sender retransmits current pkt if ACK/NAK garbled
- receiver discards (doesn't deliver up) duplicate pkt

stop and wait protocol: sender sends one packet, then waits for receiver response

3-28

rdt2.1: sender, handles garbled ACK/NAKs

  state "Wait for call 0 from above":
    rdt_send(data):
      sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt)
  state "Wait for ACK or NAK 0":
    rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isNAK(rcvpkt)):
      udt_send(sndpkt)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt):
      (move to "Wait for call 1 from above")
  state "Wait for call 1 from above":
    rdt_send(data):
      sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt)
  state "Wait for ACK or NAK 1":
    rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isNAK(rcvpkt)):
      udt_send(sndpkt)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt):
      (move back to "Wait for call 0 from above")

THE FSM GETS MESSY ...

3-29

rdt2.1: receiver, handles garbled ACK/NAKs

  state "Wait for 0 from below":
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq0(rcvpkt):
      extract(rcvpkt, data); deliver_data(data)
      sndpkt = make_pkt(ACK, chksum); udt_send(sndpkt)
      (move to "Wait for 1 from below")
    rdt_rcv(rcvpkt) && corrupt(rcvpkt):
      sndpkt = make_pkt(NAK, chksum); udt_send(sndpkt)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq1(rcvpkt):   (duplicate)
      sndpkt = make_pkt(ACK, chksum); udt_send(sndpkt)
  state "Wait for 1 from below": (symmetric, with seq #s 0 and 1 swapped)

3-30

rdt2.1: discussion

Sender:
- seq # added to pkt
- two seq #'s (0, 1) will suffice. Why?
- must check if received ACK/NAK corrupted
- twice as many states: state must "remember" whether the "current" pkt has a 0 or 1 seq #

Receiver:
- must check if received packet is a duplicate
  - state indicates whether 0 or 1 is the expected pkt seq #
- note: receiver can not know if its last ACK/NAK was received OK at the sender

3-31

rdt2.2: a NAK-free protocol
- same functionality as rdt2.1, using ACKs only
- instead of NAK, receiver sends ACK for the last pkt received OK
  - receiver must explicitly include the seq # of the pkt being ACKed
- duplicate ACK at sender results in same action as NAK: retransmit current pkt

This is important because TCP uses this approach (NO NAK).

3-32

rdt2.2: sender, receiver fragments

sender FSM fragment:
  state "Wait for call 0 from above":
    rdt_send(data):
      sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt)
  state "Wait for ACK 0":
    rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,1)):
      udt_send(sndpkt)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0):
      (move on to seq # 1)

receiver FSM fragment, state "Wait for 0 from below":
  incoming transition (on correctly receiving seq 1): rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq1(rcvpkt):
    extract(rcvpkt, data); deliver_data(data)
    sndpkt = make_pkt(ACK1, chksum); udt_send(sndpkt)
  self-loop: rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || has_seq1(rcvpkt)):
    udt_send(sndpkt)    (corrupt or duplicate: re-send the last ACK)

3-33

rdt3.0: channels with errors and loss

New assumption: underlying channel can also lose packets (data or ACKs)
- checksum, seq #, ACKs, retransmissions will be of help, but not enough

Q: how to deal with loss?
- sender waits until certain data or ACK lost, then retransmits
- yuck: drawbacks?

Approach: sender waits a "reasonable" amount of time for ACK
- retransmits if no ACK received in this time
- if pkt (or ACK) just delayed (not lost):
  - retransmission will be duplicate, but use of seq #'s already handles this
  - receiver must specify seq # of pkt being ACKed
- requires countdown timer

What is the "right value" for the timer? It depends on the flow and network condition.

3-34

rdt3.0 sender

  state "Wait for call 0 from above":
    rdt_send(data):
      sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt); start_timer
    rdt_rcv(rcvpkt): (ignore)
  state "Wait for ACK0":
    rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,1)): (do nothing)
    timeout:
      udt_send(sndpkt); start_timer
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0):
      stop_timer
  state "Wait for call 1 from above":
    rdt_send(data):
      sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt); start_timer
    rdt_rcv(rcvpkt): (ignore)
  state "Wait for ACK1":
    rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,0)): (do nothing)
    timeout:
      udt_send(sndpkt); start_timer
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,1):
      stop_timer

3-35

rdt3.0 in action

(figure: sender/receiver timelines for the no-loss and lost-packet scenarios; the timer ticks while the sender waits)

3-36

rdt3.0 in action (cont.)

(figure: lost-ACK and premature-timeout scenarios)
Is it necessary to send ACK1 again?

3-37

Performance of rdt3.0
- rdt3.0 works, but performance stinks
- example: 1 Gbps link, 15 ms end-to-end propagation delay, 1KB packet:

    T_transmit = L/R = (8000 bits/pkt) / (10^9 bits/sec) = 8 microsec

    U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

  (L = packet length in bits, R = transmission rate in bps;
   U_sender = utilization = fraction of time the sender is busy sending)

- 1KB pkt every 30 msec -> 33 kB/sec throughput over a 1 Gbps link
- the network protocol limits use of the physical resources!

3-38

rdt3.0: stop-and-wait operation

(timeline, sender and receiver: first packet bit transmitted at t = 0; last packet bit transmitted at t = L/R; first/last packet bit arrives at the receiver, which sends an ACK; the ACK arrives and the next packet is sent at t = RTT + L/R)

    U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

3-39

Pipelined protocols

Pipelining: sender allows multiple "in-flight", yet-to-be-acknowledged pkts
- range of sequence numbers must be increased
- buffering at sender and/or receiver

Two generic forms of pipelined protocols: go-Back-N, selective repeat

3-40

Pipelining: increased utilization

(timeline: first packet bit transmitted at t = 0; last bit at t = L/R; the ACK of the first packet arrives at t = RTT + L/R, by which time the last bits of the 2nd and 3rd packets have also arrived and been ACKed)

    U_sender = (3 L/R) / (RTT + L/R) = 0.024 / 30.008 = 0.0008

Increase utilization by a factor of 3!
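For concreteness, a quick calculation sketch of these two utilization figures (using the slide's numbers: 1 Gbps link, 30 ms RTT, 1 KB packets):

  public class SenderUtilization {
      public static void main(String[] args) {
          double L = 8000;        // packet length, bits (1 KB packet)
          double R = 1e9;         // link rate, bits/sec (1 Gbps)
          double rtt = 0.030;     // 30 ms round-trip time

          double transmit = L / R;                               // 8 microseconds
          double stopAndWait = transmit / (rtt + transmit);      // ~0.00027
          double pipelined3  = 3 * transmit / (rtt + transmit);  // ~0.0008

          System.out.printf("stop-and-wait U  = %.5f%n", stopAndWait);
          System.out.printf("3-pkt pipeline U = %.5f%n", pipelined3);
      }
  }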

3-41

Go-Back-N (sliding window protocol)

Sender:
- k-bit seq # in pkt header
- "window" of up to N consecutive unack'ed pkts allowed
- ACK(n): ACKs all pkts up to, including seq # n - "cumulative ACK"
  - sender may receive duplicate ACKs (see receiver)
- timer for each in-flight pkt
- timeout(n): retransmit pkt n and all higher seq # pkts in the window

(For now, treat the seq # as unlimited.)

Q: what happens when a receiver is totally disconnected? MAX RETRY

DON'T FALL ASLEEP

3-42

GBN: sender extended FSM

  base = 1
  nextseqnum = 1

  rdt_send(data):
    if (nextseqnum < base+N) {
      sndpkt[nextseqnum] = make_pkt(nextseqnum, data, chksum)
      udt_send(sndpkt[nextseqnum])
      if (base == nextseqnum)
        start_timer
      nextseqnum++
    }
    else
      refuse_data(data)          (buffer data, or block the higher app)

  timeout:
    start_timer
    udt_send(sndpkt[base]); udt_send(sndpkt[base+1]); ...; udt_send(sndpkt[nextseqnum-1])

  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
    base = getacknum(rcvpkt)+1
    if (base == nextseqnum)
      stop_timer                 (no pkt in pipe)
    else
      start_timer                (reset timer)

  rdt_rcv(rcvpkt) && corrupt(rcvpkt):
    (ignore)
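The same sender logic, written out as a minimal runnable sketch in Java (hypothetical class and names; udt_send and the timer are reduced to a println and a boolean so that the window mechanics stand out):

  import java.util.Arrays;

  public class GbnSenderSketch {
      static final int N = 4;                 // window size
      int base = 1, nextseqnum = 1;
      String[] sndpkt = new String[1000];
      boolean timerRunning = false;

      boolean rdtSend(String data) {
          if (nextseqnum < base + N) {
              sndpkt[nextseqnum] = nextseqnum + ":" + data;    // make_pkt
              udtSend(sndpkt[nextseqnum]);
              if (base == nextseqnum) timerRunning = true;     // start_timer
              nextseqnum++;
              return true;
          }
          return false;                                        // refuse_data
      }

      void timeout() {                        // retransmit the whole window
          timerRunning = true;
          for (int i = base; i < nextseqnum; i++) udtSend(sndpkt[i]);
      }

      void ackReceived(int acknum) {          // cumulative ACK
          base = acknum + 1;
          timerRunning = (base != nextseqnum);
      }

      void udtSend(String pkt) { System.out.println("send " + pkt); }

      public static void main(String[] args) {
          GbnSenderSketch s = new GbnSenderSketch();
          for (String d : Arrays.asList("a", "b", "c", "d", "e")) s.rdtSend(d); // "e" is refused
          s.ackReceived(2);   // slides the window to base = 3
          s.timeout();        // resends packets 3 and 4
      }
  }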

3-43

GBN: receiver extended FSM

  expectedseqnum = 1
  sndpkt = make_pkt(expectedseqnum, ACK, chksum)

  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum):
    extract(rcvpkt, data)
    deliver_data(data)
    sndpkt = make_pkt(expectedseqnum, ACK, chksum)
    udt_send(sndpkt)
    expectedseqnum++

  default:
    udt_send(sndpkt)

ACK-only: always send ACK for the correctly-received pkt with the highest in-order seq #
- may generate duplicate ACKs
- need only remember expectedseqnum
out-of-order pkt:
- discard (don't buffer) -> no receiver buffering!
- re-ACK the pkt with the highest in-order seq #

If an in-order pkt is received, deliver it to the app and ack it. Else just drop it.
We don't even need a timer at the receiver. Simply rely on the retx by the sender.

3-44

GBN in action

(figure: sender with window size N = 4)

What determines the size of the window?
1. RTT
2. Buffer at the receiver (flow control)
3. Network congestion

Q: GBN has poor performance. How? The sender sends pkts 1,2,3,4,5,6,7,8,9,...; pkt 1 gets lost; the receiver gets pkts 2,3,4,5,... but will discard them.

3-45

Selective Repeat (improvement of the GBN Protocol)
- receiver individually acknowledges all correctly received pkts
  - buffers pkts, as needed, for eventual in-order delivery to the upper layer
  - e.g., sender sends pkts 1,2,3,4,...,10; receiver got 2,4,6,8,10; sender resends 1,3,5,7,9
- sender only resends pkts for which ACK not received
  - sender timer for EACH unACKed pkt
- sender window
  - N consecutive seq #'s
  - again limits seq #'s of sent, unACKed pkts

3-46

Selective repeat: sender, receiver windows

(figure: sender and receiver window views)

Q: why do we have this band of "sent, not yet ACKed" packets? The ACK is lost, or the ACK is on its way.

3-47

Selective repeat

sender:
- data from above:
  - if next available seq # in window, send pkt
- timeout(n):
  - resend pkt n, restart timer
- ACK(n) in [sendbase, sendbase+N]:
  - mark pkt n as received
  - if n is the smallest unACKed pkt, advance the window base to the next unACKed seq # (slide the window)

receiver:
- pkt n in [rcvbase, rcvbase+N-1]:
  - send ACK(n)
  - out-of-order: buffer
  - in-order: deliver (also deliver buffered, in-order pkts), advance window to next not-yet-received pkt
- pkt n in [rcvbase-N, rcvbase-1]:
  - ACK(n)
- otherwise: ignore

Q: why do we need to ACK a pkt below the window? The ack got lost; the sender may time out and resend the pkt; we need to ack it.
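A minimal sketch of the receiver-side window logic in Java (hypothetical names; ACKs and delivery are just printed):

  import java.util.HashMap;
  import java.util.Map;

  public class SrReceiverSketch {
      static final int N = 4;                        // window size
      int rcvBase = 1;                               // lowest not-yet-delivered seq #
      Map<Integer, String> buffer = new HashMap<>(); // out-of-order pkts

      void packetReceived(int seq, String data) {
          if (seq >= rcvBase && seq < rcvBase + N) {
              System.out.println("ACK " + seq);
              buffer.put(seq, data);
              while (buffer.containsKey(rcvBase)) {  // deliver the in-order prefix
                  System.out.println("deliver " + buffer.remove(rcvBase));
                  rcvBase++;                         // advance (slide) the window
              }
          } else if (seq >= rcvBase - N && seq < rcvBase) {
              System.out.println("ACK " + seq);      // already delivered: re-ACK
          }                                          // otherwise: ignore
      }

      public static void main(String[] args) {
          SrReceiverSketch r = new SrReceiverSketch();
          r.packetReceived(2, "b");   // buffered, ACK 2
          r.packetReceived(1, "a");   // delivers a, b; window slides to 3
          r.packetReceived(1, "a");   // below window: re-ACK 1
      }
  }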

3-48

Selective repeat in action (N=4)

(figure: timeline; the out-of-order pkt that SR buffers here would be dropped under GBN)

3-49

Selective repeat: dilemma

Example:
- seq #'s: 0, 1, 2, 3
- window size (N) = 3
- receiver sees no difference in the two scenarios!
- incorrectly passes duplicate data as new in (a)

Q: what is the relationship between seq # size and window size?
  N <= (2^k) / 2
(In real life we use k bits to implement the seq #. Practical issue.)

3-50

Why bother studying reliable data transfer?
- We know it is provided by TCP, so why bother to study it?
- Sometimes we may need to implement "some form" of reliable transfer without the heavy-duty TCP.
- A good example is multimedia streaming. Even though the application is loss tolerant, if too many packets get lost it affects the visual quality. So we may want to implement some form of reliable transfer.
- At the very least, appreciate the "good services" provided by some Internet gurus.

3-51

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-52

TCP: Overview   RFCs: 793, 1122, 1323, 2018, 2581

- point-to-point:
  - one sender, one receiver (not multicast)
- reliable, in-order byte stream:
  - no "message boundaries" (in the app layer we need delimiters)
- pipelined:
  - TCP congestion and flow control set the window size
- send & receive buffers
- full duplex data:
  - bi-directional data flow in the same connection
  - MSS: maximum segment size
- connection-oriented:
  - handshaking (exchange of control msgs) init's sender, receiver state (e.g. buffer size) before data exchange
- flow controlled:
  - sender will not overwhelm receiver

(figure: the application writes data into the TCP send buffer through the socket door; segments are carried across; the application reads data from the TCP receive buffer)

(The 800 lbs gorilla in the transport stack. PAY ATTENTION!)

3-53

TCP segment structure (32 bits wide)

- source port #, dest port #
- sequence number: counting by "bytes" of data (not segments!)
- acknowledgement number
- head len, not used, flag bits, receive window: # bytes rcvr willing to accept
- checksum (Internet checksum, as in UDP), urg data pointer
- options (variable length): due to this field we have a variable-length header
- application data (variable length)

Flag bits:
- URG: urgent data (generally not used)
- ACK: ACK # valid
- PSH: push data now (generally not used)
- RST, SYN, FIN: connection estab (setup, teardown commands)

3-54

TCP seq. #'s and ACKs

Seq. #'s:
- byte stream "number" of the first byte in the segment's data
ACKs:
- seq # of the next byte expected from the other side
- cumulative ACK
Q: how does the receiver handle out-of-order segments?
A: TCP spec doesn't say - up to the implementor

Simple telnet scenario (Host A, Host B):
- User types 'C': A sends Seq=42, ACK=79, data = 'C'
- host B ACKs receipt of 'C', echoes back 'C': Seq=79, ACK=43, data = 'C'
- host A ACKs receipt of the echoed 'C': Seq=43, ACK=80

(Initial seq #'s are negotiated during the 3-way handshake.)

3-55

TCP Round Trip Time and Timeout

Q: how to set the TCP timeout value?
- longer than RTT, but RTT varies
- too short: premature timeout, unnecessary retransmissions
- too long: slow reaction to segment loss, poor performance

Q: how to estimate RTT?
- SampleRTT: measured time from segment transmission until ACK receipt
  - ignore retransmissions
- SampleRTT will vary; want the estimated RTT "smoother"
  - average several recent measurements, not just the current SampleRTT

(figure: tx/retx/ack timelines - a timeout that is too long reacts slowly; one that is too short retransmits needlessly)

3-56

TCP Round Trip Time and Timeout

  EstimatedRTT = (1 - α) * EstimatedRTT + α * SampleRTT

- exponential weighted moving average
- influence of a past sample decreases exponentially fast
- typical value: α = 0.125

Expanding the recursion (with ERTT(0) = 0):
  ERTT(1) = (1-α) * ERTT(0) + α * SRTT(1) = α * SRTT(1)
  ERTT(2) = (1-α) * α * SRTT(1) + α * SRTT(2)
  ERTT(3) = (1-α)^2 * α * SRTT(1) + (1-α) * α * SRTT(2) + α * SRTT(3)

3-57

Example RTT estimation

(figure: SampleRTT and EstimatedRTT in milliseconds (roughly 100-350 ms) versus time in seconds, for RTT from gaia.cs.umass.edu to fantasia.eurecom.fr; EstimatedRTT is a smoothed version of the fluctuating SampleRTT)

3-58

TCP Round Trip Time and Timeout

Setting the timeout (by Jacobson/Karels):
- EstimatedRTT plus a "safety margin"
  - large variation in EstimatedRTT -> larger safety margin
- first estimate how much SampleRTT deviates from EstimatedRTT:

    DevRTT = (1 - β) * DevRTT + β * |SampleRTT - EstimatedRTT|
    (typically, β = 0.25)

Then set the timeout interval:

    TimeoutInterval = EstimatedRTT + 4 * DevRTT
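Putting the two moving averages and the timeout rule together, a minimal Java sketch (hypothetical class; the gains match the typical values α = 0.125 and β = 0.25 quoted above):

  public class RttEstimator {
      static final double ALPHA = 0.125, BETA = 0.25;
      double estimatedRtt = 0.1;   // seconds; some initial guess
      double devRtt = 0.0;

      // Feed one SampleRTT (for a segment that was not retransmitted).
      void addSample(double sampleRtt) {
          devRtt = (1 - BETA) * devRtt + BETA * Math.abs(sampleRtt - estimatedRtt);
          estimatedRtt = (1 - ALPHA) * estimatedRtt + ALPHA * sampleRtt;
      }

      double timeoutInterval() {           // EstimatedRTT + 4 * DevRTT
          return estimatedRtt + 4 * devRtt;
      }

      public static void main(String[] args) {
          RttEstimator est = new RttEstimator();
          for (double s : new double[]{0.12, 0.25, 0.11, 0.30}) {
              est.addSample(s);
              System.out.printf("EstimatedRTT=%.3f  Timeout=%.3f%n",
                      est.estimatedRtt, est.timeoutInterval());
          }
      }
  }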

3-59

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-60

TCP reliable data transfer
- TCP creates an rdt service on top of IP's unreliable service
- pipelined segments (for performance)
- cumulative acks
- TCP uses a single retransmission timer
- retransmissions are triggered by:
  - timeout events
  - duplicate acks (for performance reasons)
- initially consider a simplified TCP sender:
  - ignore duplicate acks
  - ignore flow control, congestion control

3-61

TCP sender events:

data rcvd from app:
- create segment with seq #
- seq # is the byte-stream number of the first data byte in the segment
- start timer if not already running (think of the timer as for the oldest unacked segment)
- expiration interval: TimeOutInterval

timeout:
- retransmit the segment that caused the timeout
- restart timer

Ack rcvd:
- if it acknowledges previously unacked segments:
  - update what is known to be acked
  - start timer if there are outstanding segments

3-62

TCP sender (simplified)

  NextSeqNum = InitialSeqNum
  SendBase = InitialSeqNum

  loop (forever) {
    switch(event)

    event: data received from application above
      create TCP segment with sequence number NextSeqNum
      if (timer currently not running)
        start timer
      pass segment to IP
      NextSeqNum = NextSeqNum + length(data)

    event: timer timeout
      retransmit not-yet-acknowledged segment with smallest sequence number
      start timer

    event: ACK received, with ACK field value of y
      if (y > SendBase) {
        SendBase = y
        if (there are currently not-yet-acknowledged segments)
          start timer
      }
  } /* end of loop forever */

Comment:
- SendBase-1: last cumulatively ack'ed byte
Example:
- SendBase-1 = 71; y = 73, so the rcvr wants 73+; y > SendBase, so new data is acked

3-63

TCP retransmission scenarios

(figure: two Host A / Host B timelines)
- lost ACK scenario: A sends Seq=92, 8 bytes data; B's ACK=100 is lost; A times out and resends Seq=92, 8 bytes data; B re-sends ACK=100; SendBase stays 100
- premature timeout scenario: A sends Seq=92, 8 bytes and Seq=100, 20 bytes; the ACK=100 arrives only after A's timeout, so A resends Seq=92, 8 bytes; B's cumulative ACK=120 covers both segments; SendBase moves from 100 to 120

3-64

TCP retransmission scenarios (more)

(figure: cumulative ACK scenario - A sends Seq=92, 8 bytes and Seq=100, 20 bytes; the ACK=100 is lost, but ACK=120 arrives before the timeout, so nothing is retransmitted; SendBase = 120)

Room for improvement?

3-65

TCP ACK generation [RFC 1122, RFC 2581]

Event at receiver -> TCP receiver action:
- Arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed
  -> Delayed ACK. Wait up to 500 ms for the next segment. If no next segment, send ACK.
- Arrival of in-order segment with expected seq #; one other segment has an ACK pending
  -> Immediately send a single cumulative ACK, ACKing both in-order segments.
- Arrival of out-of-order segment with higher-than-expected seq #; gap detected
  -> Immediately send a duplicate ACK, indicating the seq # of the next expected byte.
- Arrival of a segment that partially or completely fills the gap
  -> Immediately send ACK, provided that the segment starts at the lower end of the gap.

ACK the "largest in-order byte" seq #.
Delayed ACKs REDUCE FEEDBACK TRAFFIC BY HALF.

3-66

Fast Retransmit
- time-out period often relatively long:
  - long delay before resending a lost packet
- detect lost segments via duplicate ACKs
  - sender often sends many segments back-to-back
  - if a segment is lost, there will likely be many duplicate ACKs
- if sender receives 3 duplicate ACKs for the same data, it supposes that the segment after the ACKed data was lost:
  - fast retransmit: resend the segment before the timer expires

3-67

Fast retransmit algorithm:

  event: ACK received, with ACK field value of y
    if (y > SendBase) {
      SendBase = y
      if (there are currently not-yet-acknowledged segments)
        start timer
    }
    else {                       /* a duplicate ACK for an already ACKed segment */
      increment count of dup ACKs received for y
      if (count of dup ACKs received for y == 3)
        resend segment with sequence number y    /* fast retransmit */
    }

Q: why resend the pkt with seq # y?
A: That is what the receiver expects.

3-68

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-69

TCP Flow Control
- receive side of TCP connection has a receive buffer
- app process may be slow at reading from the buffer
- flow control: sender won't overflow receiver's buffer by transmitting too much, too fast
- speed-matching service: matching the send rate to the receiving app's drain rate

3-70

TCP Flow control: how it works

(Suppose the TCP receiver discards out-of-order segments)
- spare room in buffer:
    RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
- Rcvr advertises spare room by including the value of RcvWindow in segments
- Sender limits unACKed data to RcvWindow
  - guarantees the receive buffer doesn't overflow

This goes to show that the design process of the header is important.
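A tiny sketch of this bookkeeping (hypothetical byte counters and buffer size; the sender's side of the contract is to keep LastByteSent - LastByteAcked <= RcvWindow):

  public class ReceiveWindowSketch {
      public static void main(String[] args) {
          long rcvBuffer = 64 * 1024;      // total receive buffer (bytes), assumed size
          long lastByteRcvd = 150_000;     // hypothetical byte counters
          long lastByteRead = 110_000;

          // Spare room advertised back to the sender in the receive-window field.
          long rcvWindow = rcvBuffer - (lastByteRcvd - lastByteRead);
          System.out.println("RcvWindow = " + rcvWindow + " bytes");  // 25536
      }
  }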

3-71

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender, receiver establish a "connection" before exchanging data segments
- initialize TCP variables:
  - seq. #s
  - buffers, flow control info (e.g. RcvWindow)
- client: connection initiator
    Socket clientSocket = new Socket("hostname", portNumber);
- server: contacted by client
    Socket connectionSocket = welcomeSocket.accept();

Three way handshake:
Step 1: client host sends TCP SYN segment to server
- specifies initial seq #
- no data
Step 2: server host receives SYN, replies with SYN-ACK segment
- server allocates buffers
- specifies server initial seq. #
Step 3: client receives SYN-ACK, replies with ACK segment, which may contain data

IMPORTANT: SYN FLOODING

3-73

TCP three-way handshake

- connection request: SYN=1, seq=client_isn
- connection granted: SYN=1, seq=server_isn, Ack=client_isn+1
- ACK: SYN=0, seq=client_isn+1, Ack=server_isn+1

3-74

TCP Connection Management (cont.)

Closing a connection:
- client closes socket: clientSocket.close();

Step 1: client end system sends TCP FIN control segment to server
Step 2: server receives FIN, replies with ACK. Closes connection, sends FIN.

(figure: client sends FIN ("close"); server ACKs and sends its own FIN ("close"); client ACKs, enters timed wait, then is closed)

Q: why don't we combine ACK and FIN? The sender may still have some data in the pipeline.

3-75

TCP Connection Management (cont.)

Step 3: client receives FIN, replies with ACK
- enters "timed wait" - will respond with ACK to received FINs
Step 4: server receives ACK. Connection closed.

Note: with a small modification, can handle simultaneous FINs.

(figure: both sides in "closing"; client enters timed wait before "closed"; server goes to "closed" on receiving the final ACK)

3-76

TCP Connection Management (cont.)

(figures: TCP client lifecycle and TCP server lifecycle state diagrams)

3-77

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-78

Principles of Congestion Control

Congestion:
- informally: "too many sources sending too much data too fast for the network to handle"
- different from flow control!
- manifestations:
  - lost packets (buffer overflow at routers)
  - long delays (queueing in router buffers)
- another top-10 problem!

TCP provides one of the MANY WAYS to perform congestion control.

3-79

Causes/costs of congestion: scenario 1
- two senders, two receivers (homogeneous)
- one router, infinite buffers, capacity C
- no retransmission

(figure: Host A and Host B send lambda_in (original data, from application) into an unlimited shared output link buffer; lambda_out is delivered to the application)

- large delays when congested
- maximum achievable throughput

3-80

Causes/costs of congestion: scenario 2
- one router, finite buffers
- sender retransmission of lost packet (due to the transport layer: the pkt got dropped)

(figure: Host A offers lambda_in (original data) plus lambda'_in (original data plus retransmitted data) into finite shared output link buffers; lambda_out is delivered at Host B)

3-81

Causes/costs of congestion: scenario 2 (cont.)
- always: lambda_in = lambda_out (goodput)
- "perfect" retransmission, only when loss: lambda'_in > lambda_out
- retransmission of a delayed (not lost) packet makes lambda'_in larger (than the perfect case) for the same lambda_out

"costs" of congestion:
- more work (retrans) for a given "goodput"
- unneeded retransmissions: the link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3
- four senders
- multihop paths
- timeout/retransmit

Q: what happens as lambda_in and lambda'_in increase?

(figure: Host A sends lambda_in (original data) plus retransmitted data through finite shared output link buffers, across multiple hops, toward Host B)

3-83

Causes/costs of congestion: scenario 3 (cont.)

(figure: lambda_out collapses as the offered load grows)

Another "cost" of congestion:
- when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted!

The system collapses (e.g. students!)

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
- no explicit feedback from network
- congestion inferred from end-system observed loss, delay
- approach taken by TCP

Network-assisted congestion control:
- routers provide feedback to end systems
  - single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
  - explicit rate sender should send at

3-85

Case study: ATM ABR congestion control

ABR: available bit rate:
- "elastic service"
- if sender's path "underloaded": sender should use available bandwidth
- if sender's path congested: sender throttled to minimum guaranteed rate

RM (resource management) cells:
- sent by sender, interspersed with data cells
- bits in RM cell set by switches ("network-assisted")
  - NI bit: no increase in rate (mild congestion)
  - CI bit: congestion indication
- RM cells returned to sender by receiver, with bits intact

3-86

Case study: ATM ABR congestion control (cont.)
- two-byte ER (explicit rate) field in RM cell
  - congested switch may lower the ER value in the cell
  - sender's send rate thus the minimum supportable rate on the path
- EFCI (explicit forward congestion indication) bit in data cells: set to 1 in a congested switch
  - if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-88

TCP Congestion Control
- end-end control (no network assistance)
- sender limits transmission:
    LastByteSent - LastByteAcked <= CongWin
- roughly,
    rate = CongWin / RTT  Bytes/sec
- CongWin is a dynamic function of perceived network congestion

How does the sender perceive congestion?
- loss event = timeout or 3 duplicate acks
- TCP sender reduces rate (CongWin) after a loss event

three mechanisms:
- AIMD
- slow start
- conservative after timeout events

Note: CC must be efficient to make use of the available BW.

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

(figure: congestion window of a long-lived TCP connection oscillating in a sawtooth between roughly 8, 16 and 24 Kbytes over time)

Role of ACK:
1. An indication of loss
2. The ack is self-clocking: if the delay is large, the rate of congestion-window growth will be reduced

Human analogy: HK government, or CUHK?

3-90

TCP Slow Start
- when connection begins, CongWin = 1 MSS
  - example: MSS = 500 bytes & RTT = 200 msec
  - initial rate = 20 kbps
- available bandwidth may be >> MSS/RTT
  - desirable to quickly ramp up to a respectable rate
- when connection begins, increase rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)
- when connection begins, increase rate exponentially until the first loss event:
  - double CongWin every RTT
  - done by incrementing CongWin for every ACK received
- summary: initial rate is slow, but ramps up exponentially fast

(figure: Host A sends one segment, then two segments, then four segments, one RTT apart)

3-92

Refinement
- after 3 dup ACKs:
  - CongWin is cut in half
  - window then grows linearly
  - this is known as the "fast-recovery" phase; implemented in new TCP-Reno
- but after a timeout event:
  - CongWin instead set to 1 MSS
  - window then grows exponentially
  - to a threshold, then grows linearly

Philosophy:
- 3 dup ACKs indicates the network is capable of delivering some segments
- a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.

Implementation:
- variable Threshold
- at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

(figure: congestion window size (segments) vs. transmission round, showing the threshold and the different behaviour of TCP Tahoe and TCP Reno after a loss)

3-94

Summary: TCP Congestion Control
- When CongWin is below Threshold, sender in slow-start phase, window grows exponentially.
- When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly.
- When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold.
- When a timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP-Vegas: instead of reacting to a loss, anticipate & prepare for a loss.

3-95

TCP sender congestion control

Event: ACK receipt for previously unacked data; State: Slow Start (SS)
  Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
  Commentary: resulting in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data; State: Congestion Avoidance (CA)
  Action: CongWin = CongWin + MSS * (MSS/CongWin)
  Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK; State: SS or CA
  Action: Threshold = CongWin/2, CongWin = Threshold; set state to "Congestion Avoidance"
  Commentary: fast recovery, implementing multiplicative decrease. CongWin will not drop below 1 MSS.

Event: timeout; State: SS or CA
  Action: Threshold = CongWin/2, CongWin = 1 MSS; set state to "Slow Start"
  Commentary: enter slow start

Event: duplicate ACK; State: SS or CA
  Action: increment duplicate ACK count for the segment being acked
  Commentary: CongWin and Threshold not changed
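The event table reads naturally as a small state machine; a minimal sketch in Java (hypothetical class; CongWin tracked in bytes with an assumed MSS of 1460 bytes and an arbitrary initial Threshold):

  public class CongestionWindowSketch {
      static final int MSS = 1460;                  // assumed segment size, bytes
      double congWin = MSS;                         // start at 1 MSS
      double threshold = 64 * 1024;                 // arbitrary initial threshold
      boolean slowStart = true;

      void onNewAck() {                             // ACK for previously unacked data
          if (slowStart) {
              congWin += MSS;                       // doubles CongWin every RTT
              if (congWin > threshold) slowStart = false;
          } else {
              congWin += MSS * ((double) MSS / congWin);  // +1 MSS per RTT
          }
      }

      void onTripleDupAck() {                       // fast recovery (Reno)
          threshold = congWin / 2;
          congWin = Math.max(threshold, MSS);
          slowStart = false;
      }

      void onTimeout() {                            // back to slow start
          threshold = congWin / 2;
          congWin = MSS;
          slowStart = true;
      }

      public static void main(String[] args) {
          CongestionWindowSketch cc = new CongestionWindowSketch();
          for (int i = 0; i < 100; i++) cc.onNewAck();
          System.out.println("CongWin after 100 new ACKs: " + (long) cc.congWin);
          cc.onTripleDupAck();
          System.out.println("CongWin after 3 dup ACKs:   " + (long) cc.congWin);
      }
  }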

3-96

TCP throughput
- What is the average throughput of TCP as a function of window size and RTT?
  - ignore slow start
- Let W be the window size when loss occurs.
- When the window is W, throughput is W/RTT.
- Just after a loss, the window drops to W/2, throughput to W/(2*RTT).
- Average throughput: 0.75 * W / RTT

3-97

TCP Futures
- example: 1500 byte segments, 100 ms RTT, want 10 Gbps throughput
- requires window size W = 83,333 in-flight segments
- throughput in terms of loss rate L:

    throughput = 1.22 * MSS / (RTT * sqrt(L))

- to get 10 Gbps: L = 2 * 10^-10. Wow!
- new versions of TCP for high-speed needed!
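A quick sanity check of these numbers (a sketch that simply plugs the slide's values into the formula above):

  public class TcpFuturesCheck {
      public static void main(String[] args) {
          double mssBits = 1500 * 8;            // 1500-byte segments
          double rtt = 0.100;                   // 100 ms
          double target = 10e9;                 // 10 Gbps

          double window = target * rtt / mssBits;            // in-flight segments
          // Invert throughput = 1.22 * MSS / (RTT * sqrt(L)) for the loss rate L.
          double lossRate = Math.pow(1.22 * mssBits / (rtt * target), 2);

          System.out.printf("window ~= %.0f segments%n", window);   // ~83333
          System.out.printf("loss L ~= %.1e%n", lossRate);          // ~2e-10
      }
  }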

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.

(figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R)

3-99

Why is TCP fair?

Two competing, homogeneous sessions (e.g. similar propagation delay, etc.):
- additive increase gives a slope of 1 as throughput increases
- multiplicative decrease decreases throughput proportionally

(figure: connection 1 throughput vs. connection 2 throughput, each bounded by R; repeated congestion avoidance (additive increase) and loss (decrease window by factor of 2) move the operating point toward the equal-bandwidth-share line)

3-100

Fairness (more)

Fairness and UDP:
- multimedia apps often do not use TCP
  - do not want rate throttled by congestion control
- instead use UDP:
  - pump audio/video at constant rate, tolerate packet loss
- research area: TCP friendly

Fairness and parallel TCP connections:
- nothing prevents an app from opening parallel connections between 2 hosts
- Web browsers do this
- example: link of rate R supporting 9 connections
  - new app asks for 1 TCP, gets rate R/10
  - new app asks for 11 TCPs, gets around R/2!

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by:
- TCP connection establishment
- data transmission delay
- slow start (sender's congestion window)

Notation, assumptions:
- assume one link between client and server, of rate R
- S: MSS (bits)
- O: object size (bits)
- no retransmissions (no loss, no corruption)
- the protocol's overhead is negligible

Window size:
- first assume: fixed congestion window, W segments
- then dynamic window, modeling slow start

3-102

Fixed congestion window (1): assume no congestion window constraint

Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R
- the ACK for the first segment in the window returns before a window's worth of data has been sent

    delay = 2*RTT + O/R

(The object request is piggybacked; this is the "lower bound" of latency.)

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R
- wait for ACK after sending a window's worth of data

    delay = 2*RTT + O/R + (K-1) * [S/R + RTT - W*S/R]

where K is the number of windows of data that cover the object.
If O is the "size" of the object, we have K = O/(W*S) (rounded up to an integer).

3-104

TCP Delay Modeling: Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the delay for one object is:

    Latency = 2*RTT + O/R + P * [RTT + S/R] - (2^P - 1) * S/R

where P is the number of times TCP idles at the server:

    P = min{Q, K-1}

- where Q is the number of times the server would idle if the object were of infinite size
- and K is the number of windows that cover the object
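A small calculator for this latency formula (a sketch; K and Q are computed as derived on the following slides, and the assumed parameter values are those of Experiment A later in this section, so the outputs are in the same ballpark as that table):

  public class SlowStartLatency {
      // Latency of one object of O bits over a link of rate R bps,
      // segment size S bits, round-trip time rtt seconds, with slow start.
      static double latency(double O, double R, double S, double rtt) {
          int K = (int) Math.ceil(Math.log(O / S + 1) / Math.log(2));        // windows covering the object
          int Q = (int) Math.ceil(Math.log(1 + rtt / (S / R)) / Math.log(2)); // idles for an infinite object
          int P = Math.min(Q, K - 1);
          return 2 * rtt + O / R + P * (rtt + S / R) - (Math.pow(2, P) - 1) * S / R;
      }

      public static void main(String[] args) {
          double S = 536 * 8, rtt = 0.1, O = 100e3 * 8;   // Experiment A values (assumed)
          for (double R : new double[]{28e3, 100e3, 1e6, 10e6}) {
              // e.g. about 28.9 s at 28 kbps and about 8.4 s at 100 kbps
              System.out.printf("R=%8.0f bps  latency=%.2f s%n", R, latency(O, R, S, rtt));
          }
      }
  }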

3-105

TCP Delay Modeling: Slow Start (2)

Delay components:
- 2 RTT for connection establishment and request
- O/R to transmit the object
- time the server idles due to slow start

The server idles P = min{K-1, Q} times.

Example:
- O/S = 15 segments
- K = 4 windows
- Q = 2
- P = min{K-1, Q} = 2
- server idles P = 2 times

(figure: timeline at client and server - initiate TCP connection, request object, first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R, complete transmission, object delivered)

3-106

TCP Delay Modeling (3)

For k = 1, 2, ..., K:
- time to transmit the kth window: 2^(k-1) * S/R
- time from when the server starts to send a segment until it receives the acknowledgement: S/R + RTT
- idle time after the kth window: [S/R + RTT - 2^(k-1) * S/R]^+

    delay = 2*RTT + O/R + sum_{k=1}^{P} idleTime_k
          = 2*RTT + O/R + sum_{k=1}^{P} [S/R + RTT - 2^(k-1) * S/R]
          = 2*RTT + O/R + P*[RTT + S/R] - (2^P - 1) * S/R

(figure: same client/server timeline as on the previous slide)

3-107

TCP Delay Modeling (4)

Recall: K = number of windows that cover the object.
How do we calculate K?

    K = min{k : 2^0*S + 2^1*S + ... + 2^(k-1)*S >= O}
      = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
      = min{k : 2^k - 1 >= O/S}
      = min{k : k >= log2(O/S + 1)}
      = ceil(log2(O/S + 1))

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).

3-108

Experiment A
S = 536 bytes, RTT = 100 msec, O = 100 Kbytes (relatively large)

  R         O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
  28 kbps   28.6 sec   1   28.8 sec                    28.9 sec
  100 kbps  8 sec      2   8.2 sec                     8.4 sec
  1 Mbps    800 msec   5   1.0 sec                     1.5 sec
  10 Mbps   80 msec    7   0.28 sec                    0.98 sec

Observation: slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B
S = 536 bytes, RTT = 100 msec, O = 5 Kbytes (relatively small)

  R         O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
  28 kbps   1.43 sec   1   1.63 sec                    1.73 sec
  100 kbps  0.4 sec    2   0.6 sec                     0.76 sec
  1 Mbps    40 msec    3   0.24 sec                    0.52 sec
  10 Mbps   4 msec     3   0.20 sec                    0.50 sec

Observation: slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C
S = 536 bytes, RTT = 1 sec, O = 5 Kbytes (relatively small)

  R         O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
  28 kbps   1.43 sec   3   3.4 sec                     5.8 sec
  100 kbps  0.4 sec    3   2.4 sec                     5.2 sec
  1 Mbps    40 msec    3   2.0 sec                     5.0 sec
  10 Mbps   4 msec     3   2.0 sec                     5.0 sec

Observation: slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
- 1 base HTML page (of size O bits)
- M images (each of size O bits)

Non-persistent HTTP:
- M+1 TCP connections in series
- Response time = (M+1)*O/R + (M+1)*2*RTT + sum of idle times

Persistent HTTP:
- 2 RTT to request and receive the base HTML file
- 1 RTT to request and receive the M images
- Response time = (M+1)*O/R + 3*RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
- suppose M/X is an integer
- 1 TCP connection for the base file
- M/X sets of parallel connections for the images
- Response time = (M+1)*O/R + (M/X + 1)*2*RTT + sum of idle times
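A small comparison sketch (hypothetical; it evaluates only the terms shown above and ignores the sum-of-idle-times contribution, so it gives only the lower-bound part of the response times):

  public class HttpResponseTimeSketch {
      static double nonPersistent(int M, double O, double R, double rtt) {
          return (M + 1) * O / R + (M + 1) * 2 * rtt;
      }
      static double persistent(int M, double O, double R, double rtt) {
          return (M + 1) * O / R + 3 * rtt;
      }
      static double parallelNonPersistent(int M, int X, double O, double R, double rtt) {
          return (M + 1) * O / R + ((double) M / X + 1) * 2 * rtt;
      }

      public static void main(String[] args) {
          int M = 10, X = 5;
          double O = 5 * 8 * 1024, rtt = 0.1;   // 5 Kbytes per object, RTT = 100 ms
          for (double R : new double[]{28e3, 100e3, 1e6, 10e6}) {
              System.out.printf("R=%8.0f  non-pers=%.2f  pers=%.2f  parallel=%.2f%n",
                      R, nonPersistent(M, O, R, rtt), persistent(M, O, R, rtt),
                      parallelNonPersistent(M, X, O, R, rtt));
          }
      }
  }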

3-112

(figure: HTTP response time in seconds for RTT = 100 msec, O = 5 Kbytes, M = 10 and X = 5, at 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps, comparing non-persistent, persistent and parallel non-persistent)

For a low-bandwidth connection, the response time is dominated by transmission time.
Persistent connections only give a minor improvement over parallel connections.

3-113

(figure: HTTP response time in seconds for RTT = 1 sec, O = 5 Kbytes, M = 10 and X = 5, same four link rates and three schemes)

For larger RTT, the response time is dominated by TCP establishment & slow start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3: Summary
- principles behind transport layer services:
  - multiplexing, demultiplexing
  - reliable data transfer
  - flow control
  - congestion control
- instantiation and implementation in the Internet:
  - UDP
  - TCP

Next:
- leaving the network "edge" (application, transport layers)
- into the network "core"

Page 2: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-2

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-3

Transport services and protocols provide logical

communication between app processes running on different hosts

transport protocols run in end systems send side breaks app

messages into segments passes to network layer

rcv side reassembles segments into messages passes to app layer

more than one transport protocol available to apps Internet TCP and UDP

application

transportnetworkdata linkphysical

application

transportnetworkdata linkphysical

networkdata linkphysical

networkdata linkphysical

networkdata linkphysical

networkdata linkphysicalnetwork

data linkphysical

logical end-end transport

3-4

Transport vs network layer network layer

logical communication between hosts

transport layer logical communication between processes relies on enhances

network layer services

Household analogy12 kids sending

letters to 12 kids processes = kids app messages =

letters in envelopes hosts = houses transport protocol

= Ann and Bill network-layer

protocol = postal service

Another analogy1 Post office -gt network layer2 My wife -gt transport layer

3-5

Internet transport-layer protocols reliable in-order

delivery (TCP) congestion control

(distributed control) flow control connection setup

unreliable unordered delivery UDP no-frills extension of

ldquobest-effortrdquo IP

services not available delay guarantees bandwidth guarantees

application

transportnetworkdata linkphysical

application

transportnetworkdata linkphysical

networkdata linkphysical

networkdata linkphysical

networkdata linkphysical

networkdata linkphysicalnetwork

data linkphysical

logical end-end transport

Research issues

3-6

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-7

Multiplexingdemultiplexing

application

transport

network

link

physical

P1 application

transport

network

link

physical

application

transport

network

link

physical

P2P3 P4P1

host 1 host 2 host 3

= process= socket

delivering received segmentsto correct socket

Demultiplexing at rcv hostgathering data from multiplesockets enveloping data with header (later used for demultiplexing)

Multiplexing at send host

FTP telnet

3-8

How demultiplexing works host receives IP datagrams

each datagram has source IP address destination IP address

each datagram carries 1 transport-layer segment

each segment has source destination port number (recall well-known port numbers for specific applications)

host uses IP addresses amp port numbers to direct segment to appropriate socket

source port dest port

32 bits

applicationdata

(message)

other header fields

TCPUDP segment format

3-9

Connectionless demultiplexing

Create sockets with port numbers

DatagramSocket mySocket1 = new DatagramSocket(99111)

DatagramSocket mySocket2 = new DatagramSocket(99222)

UDP socket identified by two-tuple

(dest IP address dest port number)

When host receives UDP segment checks destination port

number in segment directs UDP segment to

socket with that port number

IP datagrams with different source IP addresses andor source port numbers directed to same socket (this is how a system can serve multiple requests)

3-10

Connectionless demux (cont)

DatagramSocket serverSocket = new DatagramSocket(6428)

ClientIPB

P3

client IP A

P1P1P3

serverIP C

SP 6428

DP 9157

SP 9157

DP 6428

SP 6428

DP 5775

SP 5775

DP 6428

SP provides ldquoreturn addressrdquo

Source IP and port can be spoofed

Based on destination IP and

port

3-11

Connection-oriented demux

TCP socket identified by 4-tuple source IP address source port number dest IP address dest port number

recv host uses all four values to direct segment to appropriate socket

Server host may support many simultaneous TCP sockets each socket identified

by its own 4-tuple

Web servers have different sockets for each connecting client non-persistent HTTP

will have different socket for each request

3-12

Connection-oriented demux (cont)

ClientIPB

P3

client IP A

P1P1P3

serverIP C

SP 80

DP 9157

SP 9157

DP 80

SP 80

DP 5775

SP 5775

DP 80

P4

(S-IPSP D-IP DP)

3-13

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-14

UDP User Datagram Protocol [RFC 768]

ldquono frillsrdquo ldquobare bonesrdquo Internet transport protocol

ldquobest effortrdquo service UDP segments may be lost delivered out of order

to app connectionless

no handshaking between UDP sender receiver

each UDP segment handled independently of others

Why is there a UDP no connection

establishment (which can add delay)

simple no connection state at sender receiver

small segment header no congestion control

UDP can blast away as fast as desired

3-15

UDP more

often used for streaming multimedia apps loss tolerant rate sensitive

other UDP uses DNS SNMP

reliable transfer over UDP add reliability at application layer application-specific

error recovery (eg FTP based on UDP but with recovery)

[Figure: UDP segment format, 32 bits wide — source port #, dest port #, length, checksum, application data (message).]

length: length in bytes of the UDP segment, including header

When the network is stressed, you PRAY!

3-16

UDP checksum

Sender: treats segment contents as a sequence of 16-bit integers; checksum: addition (1's complement sum) of segment contents; sender puts the checksum value into the UDP checksum field.

Receiver: computes the checksum of the received segment; checks if the computed checksum equals the checksum field value: NO — error detected; YES — no error detected. But maybe errors nonetheless? More later …

Goal: detect "errors" (e.g., flipped bits) in the transmitted segment.

E.g., 1+2+3 = 6, but so is 0+3+3 = 6 — a simple sum cannot catch every error pattern.

3-17

Internet Checksum Example Note

When adding numbers a carryout from the most significant bit needs to be added to the result

Example: add two 16-bit integers

               1 1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0
               1 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
  wraparound   1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1
  sum            1 0 1 1 1 0 1 1 1 0 1 1 1 1 0 0
  checksum       0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1
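A small Java sketch of the same 1's-complement computation (illustrative names; the odd-length padding and final carry fold follow RFC 1071 and are assumptions not shown on the slide):

public class InternetChecksum {
    static int checksum(byte[] data) {
        long sum = 0;
        for (int i = 0; i < data.length; i += 2) {
            int hi = data[i] & 0xff;
            int lo = (i + 1 < data.length) ? (data[i + 1] & 0xff) : 0; // pad odd length
            sum += (hi << 8) | lo;
            sum = (sum & 0xffff) + (sum >> 16);   // wrap the carry-out back in
        }
        sum = (sum & 0xffff) + (sum >> 16);       // fold any remaining carry
        return (int) (~sum & 0xffff);             // 1's complement of the sum
    }

    public static void main(String[] args) {
        byte[] segment = "transport layer".getBytes();
        System.out.printf("checksum = 0x%04x%n", checksum(segment));
    }
}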

3-18

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP — segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-19

Principles of reliable data transfer: important in app, transport and link layers — on the top-10 list of important networking topics!

The characteristics of the unreliable channel will determine the complexity of the reliable data transfer protocol (rdt).

The rdt abstraction — this picture sets the scenario.

3-20

Reliable data transfer getting started

send side / receive side

rdt_send(): called from above (e.g., by the app); passes data to be delivered to the receiver's upper layer

udt_send(): called by rdt to transfer a packet over the unreliable channel to the receiver

rdt_rcv(): called when a packet arrives on the rcv side of the channel

deliver_data(): called by rdt to deliver data to the upper layer

Let us now look at the guts of these modules. Any questions?

3-21

Reliable data transfer: getting started

We'll incrementally develop the sender and receiver sides of a reliable data transfer protocol (rdt).

consider only unidirectional data transfer, but control info will flow in both directions!

use finite state machines (FSM) to specify sender and receiver: when in a "state", the next state is uniquely determined by the next event; each transition is labeled "event causing state transition / actions taken on state transition".

Event: timer expires, message received, … etc. Action: execute a program, send a message, … etc.

(DON'T FALL ASLEEP!)

3-22

Rdt10 reliable transfer over a reliable channel

underlying channel perfectly reliable: no bit errors, no loss of packets

separate FSMs for sender and receiver: the sender sends data into the underlying channel, the receiver reads data from the underlying channel

sender: Wait for call from above —— rdt_send(data): packet = make_pkt(data); udt_send(packet)

receiver: Wait for call from below —— rdt_rcv(packet): extract(packet, data); deliver_data(data)

In reality this is an unrealistic assumption, but …

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in a packet: recall that a UDP-style checksum detects bit errors

the question: how to recover from errors?

acknowledgements (ACKs): receiver explicitly tells sender that pkt was received OK

negative acknowledgements (NAKs): receiver explicitly tells sender that pkt had errors; sender retransmits pkt on receipt of a NAK

(human scenario using ACKs/NAKs — ACK: "I love you" / "I love you too"; NAK: "I love you" / "I don't love you")

new mechanisms in rdt2.0 (beyond rdt1.0): error detection; receiver feedback: control msgs (ACK/NAK), rcvr → sender

3-24

rdt20 FSM specification

sender:
  Wait for call from above —— rdt_send(data): sndpkt = make_pkt(data, checksum); udt_send(sndpkt)
  Wait for ACK or NAK:
    rdt_rcv(rcvpkt) && isNAK(rcvpkt): udt_send(sndpkt)
    rdt_rcv(rcvpkt) && isACK(rcvpkt): (back to Wait for call from above)

receiver:
  Wait for call from below:
    rdt_rcv(rcvpkt) && corrupt(rcvpkt): udt_send(NAK)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt): extract(rcvpkt, data); deliver_data(data); udt_send(ACK)

A buffer is needed to store data from the application layer, or the call must block.

3-25

rdt20 operation with no errors

(Same FSMs as on the previous slide; the figure highlights the error-free path: the sender sends, the receiver delivers the data and ACKs, the sender returns to waiting for data from above.)

3-26

rdt20 error scenario

(Same FSMs again; the figure highlights the error path: the receiver detects a corrupt pkt and sends a NAK, and the sender retransmits sndpkt.) GOT IT!

3-27

rdt20 has a fatal flaw

What happens if the ACK/NAK is corrupted?

the sender doesn't know what happened at the receiver!

can't just retransmit: possible duplicate

What to do? have the sender ACK/NAK the receiver's ACK/NAK? What if the sender's ACK/NAK is lost? Retransmit — but this might cause retransmission of a correctly received pkt.

Handling duplicates: the sender adds a sequence number to each pkt; the sender retransmits the current pkt if the ACK/NAK is garbled; the receiver discards (doesn't deliver up) a duplicate pkt.

The sender sends one packet, then waits for the receiver's response: a stop-and-wait protocol.

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from above —— rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt)
Wait for ACK or NAK 0:
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isNAK(rcvpkt)): udt_send(sndpkt)
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt): (move to Wait for call 1)
Wait for call 1 from above —— rdt_send(data): sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt)
Wait for ACK or NAK 1:
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isNAK(rcvpkt)): udt_send(sndpkt)
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt): (move to Wait for call 0)

THE FSM GETS MESSY!

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below:
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq0(rcvpkt): extract(rcvpkt, data); deliver_data(data); sndpkt = make_pkt(ACK, chksum); udt_send(sndpkt)  (move to Wait for 1)
  rdt_rcv(rcvpkt) && corrupt(rcvpkt): sndpkt = make_pkt(NAK, chksum); udt_send(sndpkt)
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq1(rcvpkt): sndpkt = make_pkt(ACK, chksum); udt_send(sndpkt)  (duplicate: re-ACK)
Wait for 1 from below: symmetric, with seq #'s 0 and 1 swapped.

3-30

rdt21 discussion

Sender: seq # added to each pkt; two seq #'s (0, 1) will suffice — why? must check whether the received ACK/NAK is corrupted; twice as many states: the state must "remember" whether the "current" pkt has seq # 0 or 1.

Receiver: must check whether the received packet is a duplicate; the state indicates whether 0 or 1 is the expected pkt seq #.

Note: the receiver cannot know whether its last ACK/NAK was received OK at the sender.

3-31

rdt22 a NAK-free protocol

same functionality as rdt2.1, using ACKs only

instead of a NAK, the receiver sends an ACK for the last pkt received OK; the receiver must explicitly include the seq # of the pkt being ACKed

a duplicate ACK at the sender results in the same action as a NAK: retransmit the current pkt

This is important because TCP uses this approach (no NAKs).

3-32

rdt22 sender receiver fragments

sender FSM fragment:
  Wait for call 0 from above —— rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt)
  Wait for ACK 0:
    rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt, 1)): udt_send(sndpkt)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt, 0): (move on to seq # 1)

receiver FSM fragment (state: Wait for 0 from below):
  entering from the seq-1 state —— rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq1(rcvpkt): extract(rcvpkt, data); deliver_data(data); sndpkt = make_pkt(ACK1, chksum); udt_send(sndpkt)
  self-loop —— rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || has_seq1(rcvpkt)): udt_send(sndpkt)  (re-send the last ACK)

3-33

rdt30 channels with errors and loss

New assumption: the underlying channel can also lose packets (data or ACKs) — checksums, seq #'s, ACKs and retransmissions will help, but they are not enough.

Q: how to deal with loss? If the sender waits until it is certain that data or an ACK was lost, and only then retransmits — yuck: drawbacks.

Approach: the sender waits a "reasonable" amount of time for the ACK and retransmits if no ACK is received in this time. If the pkt (or ACK) was just delayed (not lost), the retransmission will be a duplicate, but the use of seq #'s already handles this; the receiver must specify the seq # of the pkt being ACKed.

Requires a countdown timer. What is the "right value" for the timer? It depends on the flow and the network condition.
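A minimal Java sketch of such an alternating-bit sender with a countdown timer. All names, the 500 ms timeout and the channel stubs are illustrative assumptions, not the slide's FSM.

import java.util.Timer;
import java.util.TimerTask;

class Rdt30Sender {
    private int seq = 0;                 // alternating 0/1 sequence number
    private byte[] sndpkt;               // last packet sent, kept for retransmission
    private final Timer timer = new Timer(true);
    private TimerTask pending;

    void rdtSend(byte[] data) {
        sndpkt = makePkt(seq, data);
        udtSend(sndpkt);
        startTimer();
    }
    void rdtRcv(int ackSeq, boolean corrupt) {
        if (!corrupt && ackSeq == seq) { // expected ACK: stop timer, flip seq #
            pending.cancel();
            seq = 1 - seq;
        }                                // corrupt or duplicate ACK: do nothing, wait for timeout
    }
    private void startTimer() {
        pending = new TimerTask() {
            public void run() { udtSend(sndpkt); startTimer(); }  // timeout: retransmit
        };
        timer.schedule(pending, 500);    // assumed 500 ms timeout
    }
    private byte[] makePkt(int s, byte[] d) { /* add seq # and checksum */ return d; }
    private void udtSend(byte[] p) { /* hand packet to the unreliable channel */ }
}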

3-34

rdt30 sender

Wait for call 0 from above:
  rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt): (do nothing)
Wait for ACK0:
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt, 1)): (do nothing)
  timeout: udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt, 0): stop_timer  (move to Wait for call 1)
Wait for call 1 from above:
  rdt_send(data): sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt): (do nothing)
Wait for ACK1:
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt, 0)): (do nothing)
  timeout: udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt, 1): stop_timer  (move to Wait for call 0)

3-35

rdt30 in action

Timer: tick, tick, …

3-36

rdt30 in action

Is it necessary to send ACK1 again?

3-37

Performance of rdt30

rdt3.0 works, but its performance stinks. Example: 1 Gbps link, 15 ms end-to-end propagation delay, 1 KB packet:

  T_transmit = L / R = 8000 bits / 10^9 bits/sec = 8 microsec

  U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

U_sender: utilization — the fraction of time the sender is busy sending.
A 1 KB pkt every 30 msec → 33 kB/sec throughput over a 1 Gbps link.
The network protocol limits use of the physical resources!
(L = packet length in bits, R = transmission rate in bps)
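A tiny Java sketch that reproduces this utilization calculation; the window size W is a parameter so the pipelining case a few slides below can reuse it (numbers are the slide's example):

public class Utilization {
    static double utilization(double L, double R, double rttSec, int W) {
        double tTransmit = L / R;                       // seconds to push one packet
        return Math.min(1.0, W * tTransmit / (rttSec + tTransmit));
    }
    public static void main(String[] args) {
        double L = 8000, R = 1e9, rtt = 0.030;
        System.out.println(utilization(L, R, rtt, 1));  // stop-and-wait: ~0.00027
        System.out.println(utilization(L, R, rtt, 3));  // 3 pipelined pkts: ~0.0008
    }
}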

3-38

rdt30 stop-and-wait operation

sender / receiver timeline:
  first packet bit transmitted, t = 0
  last packet bit transmitted, t = L/R
  first packet bit arrives; last packet bit arrives → send ACK
  ACK arrives, send next packet, t = RTT + L/R

  U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

3-39

Pipelined protocols

Pipelining: the sender allows multiple "in-flight", yet-to-be-acknowledged pkts; the range of sequence numbers must be increased; buffering at the sender and/or receiver.

Two generic forms of pipelined protocols: Go-Back-N, Selective Repeat.

3-40

Pipelining increased utilization

sender / receiver timeline: first packet bit transmitted at t = 0; last bit at t = L/R; first packet bit arrives, last packet bit arrives → send ACK; last bit of 2nd packet arrives → send ACK; last bit of 3rd packet arrives → send ACK; ACK arrives, send next packet, t = RTT + L/R.

  U_sender = 3·(L/R) / (RTT + L/R) = 0.024 / 30.008 = 0.0008

Pipelining increases utilization by a factor of 3!

3-41

Go-Back-N (sliding window protocol)

Sender: k-bit seq # in pkt header; "window" of up to N consecutive unACKed pkts allowed.

ACK(n): ACKs all pkts up to and including seq # n — "cumulative ACK"; the sender may receive duplicate ACKs (see receiver).

timer for each in-flight pkt; timeout(n): retransmit pkt n and all higher-seq-# pkts in the window.

(For now, treat seq #'s as unlimited.)

Q: what happens when a receiver is totally disconnected? MAX RETRY.

DON'T FALL ASLEEP!

3-42

GBN sender extended FSM

initially: base = 1; nextseqnum = 1

rdt_send(data):
  if (nextseqnum < base + N) {
    sndpkt[nextseqnum] = make_pkt(nextseqnum, data, chksum)
    udt_send(sndpkt[nextseqnum])
    if (base == nextseqnum) start_timer
    nextseqnum++
  } else refuse_data(data)            // buffer the data, or block the higher app

timeout:
  start_timer
  udt_send(sndpkt[base]); udt_send(sndpkt[base+1]); …; udt_send(sndpkt[nextseqnum-1])

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
  base = getacknum(rcvpkt) + 1
  if (base == nextseqnum) stop_timer  // no pkt in the pipe
  else start_timer                    // reset the timer

rdt_rcv(rcvpkt) && corrupt(rcvpkt): (ignore)

3-43

GBN receiver extended FSM

ACK-only: always send an ACK for the correctly received pkt with the highest in-order seq #; may generate duplicate ACKs; need only remember expectedseqnum.

out-of-order pkt: discard (don't buffer) → no receiver buffering; re-ACK the pkt with the highest in-order seq #.

initially: expectedseqnum = 1; sndpkt = make_pkt(expectedseqnum, ACK, chksum)
rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum):
  extract(rcvpkt, data); deliver_data(data)
  sndpkt = make_pkt(expectedseqnum, ACK, chksum); udt_send(sndpkt)
  expectedseqnum++
default: udt_send(sndpkt)

If an in-order pkt is received, deliver it to the app and ACK it; else just drop it. We don't even need a timer at the receiver — simply rely on retransmission by the sender.
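A minimal Java sketch of this receiver rule (names are illustrative; deliverToApp() is a stub):

class GbnReceiver {
    private int expectedSeqNum = 1;
    private int lastAckSent = 0;        // value re-ACKed for out-of-order packets

    // Called for every packet that arrives uncorrupted.
    int onReceive(int seqNum, byte[] data) {
        if (seqNum == expectedSeqNum) {
            deliverToApp(data);         // in order: deliver and ACK it
            lastAckSent = expectedSeqNum;
            expectedSeqNum++;
        }
        return lastAckSent;             // out of order: duplicate cumulative ACK
    }
    private void deliverToApp(byte[] data) { /* hand data to the upper layer */ }
}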

3-44

GBN in action

What determines the size of the window? 1. RTT; 2. the buffer at the receiver (flow control); 3. network congestion.

Q: GBN can have poor performance — how? The sender sends pkts 1, 2, 3, 4, 5, 6, 7, 8, 9; pkt 1 gets lost; the receiver gets pkts 2, 3, 4, 5, … but will discard them all.

Window size = N = 4.

3-45

Selective Repeat (an improvement of the GBN protocol)

receiver individually acknowledges all correctly received pkts; buffers pkts as needed for eventual in-order delivery to the upper layer. E.g., the sender sends pkts 1, 2, 3, 4, … 10; the receiver got 2, 4, 6, 8, 10; the sender resends only 1, 3, 5, 7, 9.

sender only resends pkts for which an ACK was not received; the sender keeps a timer for EACH unACKed pkt.

sender window: N consecutive seq #'s; again limits the seq #'s of sent, unACKed pkts.

3-46

Selective repeat sender receiver windows

Q: why do the sender and receiver windows differ? Because an ACK may be lost or may still be on its way.

3-47

Selective repeat

sender:
  data from above: if the next available seq # is in the window, send the pkt
  timeout(n): resend pkt n, restart the timer for n
  ACK(n) in [sendbase, sendbase+N-1]: mark pkt n as received; if n is the smallest unACKed pkt, advance the window base to the next unACKed seq # (slide the window)

receiver:
  pkt n in [rcvbase, rcvbase+N-1]: send ACK(n); out-of-order: buffer; in-order: deliver (also deliver buffered in-order pkts), advance the window to the next not-yet-received pkt
  pkt n in [rcvbase-N, rcvbase-1]: ACK(n) — Q: why do we need this? The earlier ACK may have been lost; the sender may time out and resend the pkt, and we need to ACK it again
  otherwise: ignore

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example: seq #'s 0, 1, 2, 3; window size N = 3.

The receiver sees no difference between the two scenarios! It incorrectly passes duplicate data up as new in (a).

Q: what is the relationship between seq # space size and window size?

A: N ≤ 2^k / 2 — the window can be at most half the sequence-number space.

(In real life we use k bits to implement seq #'s — a practical issue.)

3-50

Why bother studying reliable data transfer?

We know it is provided by TCP, so why bother to study it?

Sometimes we may need to implement "some form" of reliable transfer without heavy-duty TCP.

A good example is multimedia streaming: even though the application is loss tolerant, if too many packets get lost the visual quality suffers, so we may want to implement some form of reliable transfer.

At the very least, appreciate the "good services" provided by some Internet gurus.

3-51

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP — segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data: bi-directional data flow in the same connection; MSS: maximum segment size

connection-oriented: handshaking (exchange of control msgs) inits sender and receiver state (e.g., buffer size) before data exchange

flow controlled: the sender will not overwhelm the receiver

point-to-point: one sender, one receiver (not multicast)

reliable, in-order byte stream: no "message boundaries" (in the app layer we need delimiters)

pipelined: TCP congestion and flow control set the window size

send & receive buffers — [Figure: application writes data → socket door → TCP send buffer → segments → TCP receive buffer → socket door → application reads data.]

(The 800-lbs gorilla in the transport stack. PAY ATTENTION!)

3-53

TCP segment structure

[Figure: TCP segment format, 32 bits wide — source port #, dest port #; sequence number; acknowledgement number; head len | not used | U A P R S F | receive window; checksum | urgent data pointer; options (variable length); application data (variable length).]

URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection estab (setup, teardown commands)

receive window: # bytes the rcvr is willing to accept; counting is by bytes of data (not segments!); Internet checksum (as in UDP)

Due to the options field we have a variable-length header.

3-54

TCP seq #'s and ACKs

Seq #'s: byte-stream "number" of the first byte in the segment's data.

ACKs: seq # of the next byte expected from the other side; cumulative ACK.

Q: how does the receiver handle out-of-order segments? A: the TCP spec doesn't say — up to the implementor.

Simple telnet scenario (Host A and Host B): the user types 'C' → A sends Seq=42, ACK=79, data='C'; B ACKs receipt of 'C' and echoes back 'C' with Seq=79, ACK=43, data='C'; A ACKs receipt of the echoed 'C' with Seq=43, ACK=80.

Initial seq #'s are negotiated during the 3-way handshake.

3-55

TCP Round Trip Time and Timeout

Q: how to set the TCP timeout value? Longer than RTT — but RTT varies. Too short: premature timeout, unnecessary retransmissions. Too long: slow reaction to segment loss, poor performance.

Q: how to estimate RTT? SampleRTT: measured time from segment transmission until ACK receipt; ignore retransmissions.

SampleRTT will vary; we want an estimated RTT that is "smoother": average several recent measurements, not just the current SampleRTT.

[Figure: with a retransmission, measuring from the original transmission makes the sample too long, while measuring from the retransmission makes it too short — hence samples from retransmitted segments are ignored.]

3-56

TCP Round Trip Time and Timeout

  EstimatedRTT = (1 - α) · EstimatedRTT + α · SampleRTT

Exponential weighted moving average; the influence of a past sample decreases exponentially fast; typical value: α = 0.125.

  ERTT(0) = 0
  ERTT(1) = (1-α)·ERTT(0) + α·SRTT(1) = α·SRTT(1)
  ERTT(2) = (1-α)·α·SRTT(1) + α·SRTT(2)
  ERTT(3) = (1-α)²·α·SRTT(1) + (1-α)·α·SRTT(2) + α·SRTT(3)

3-57

Example RTT estimation: RTT from gaia.cs.umass.edu to fantasia.eurecom.fr

[Plot: SampleRTT and EstimatedRTT in milliseconds (100–350 ms) versus time in seconds (1–106); EstimatedRTT tracks a smoothed version of the fluctuating SampleRTT.]

3-58

TCP Round Trip Time and Timeout — setting the timeout (Jacobson/Karels): EstimatedRTT plus a "safety margin"; a large variation in EstimatedRTT → a larger safety margin.

First estimate how much SampleRTT deviates from EstimatedRTT:

  DevRTT = (1 - β) · DevRTT + β · |SampleRTT - EstimatedRTT|     (typically β = 0.25)

Then set the timeout interval:

  TimeoutInterval = EstimatedRTT + 4 · DevRTT
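A small Java sketch of the Jacobson/Karels estimator with the typical gains above; the initial values and the sample sequence are made-up assumptions:

public class RttEstimator {
    private double estimatedRtt = 0.1;   // seconds; assumed initial value
    private double devRtt = 0.0;
    private static final double ALPHA = 0.125, BETA = 0.25;

    void onSample(double sampleRtt) {            // samples from retransmissions are ignored
        estimatedRtt = (1 - ALPHA) * estimatedRtt + ALPHA * sampleRtt;
        devRtt = (1 - BETA) * devRtt + BETA * Math.abs(sampleRtt - estimatedRtt);
    }
    double timeoutInterval() { return estimatedRtt + 4 * devRtt; }

    public static void main(String[] args) {
        RttEstimator e = new RttEstimator();
        for (double s : new double[]{0.12, 0.09, 0.30, 0.11}) e.onSample(s);
        System.out.println(e.timeoutInterval());
    }
}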

3-59

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP — segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-60

TCP reliable data transfer

TCP creates an rdt service on top of IP's unreliable service.

Pipelined segments (for performance); cumulative ACKs; TCP uses a single retransmission timer.

Retransmissions are triggered by: timeout events; duplicate ACKs (for performance reasons).

Initially consider a simplified TCP sender: ignore duplicate ACKs; ignore flow control and congestion control.

3-61

TCP sender events:

data rcvd from app: create a segment with a seq #; the seq # is the byte-stream number of the first data byte in the segment; start the timer if it is not already running (think of the timer as being for the oldest unACKed segment); expiration interval: TimeOutInterval.

timeout: retransmit the segment that caused the timeout; restart the timer.

ACK rcvd: if it acknowledges previously unACKed segments, update what is known to be ACKed; start the timer if there are outstanding segments.

3-62

TCP sender (simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event)

  event: data received from application above
    create TCP segment with sequence number NextSeqNum
    if (timer currently not running) start timer
    pass segment to IP
    NextSeqNum = NextSeqNum + length(data)

  event: timer timeout
    retransmit not-yet-acknowledged segment with smallest sequence number
    start timer

  event: ACK received, with ACK field value of y
    if (y > SendBase) {
      SendBase = y
      if (there are currently not-yet-acknowledged segments) start timer
    }
} /* end of loop forever */

Comment: SendBase - 1 = last cumulatively ACKed byte. Example: SendBase - 1 = 71, y = 73, so the rcvr wants 73+; y > SendBase, so new data is ACKed.

3-63

TCP retransmission scenarios

[Figure: two timelines between Host A and Host B.
Lost-ACK scenario: A sends Seq=92, 8 bytes of data; B's ACK=100 is lost (X); A times out and resends Seq=92, 8 bytes; B ACKs 100 again; SendBase stays at 100 until the ACK arrives.
Premature-timeout scenario: A sends Seq=92, 8 bytes and Seq=100, 20 bytes; ACK=100 arrives only after A's timer for Seq=92 expires, so A resends Seq=92, 8 bytes; ACK=120 then acknowledges everything (SendBase moves from 100 to 120).]

3-64

TCP retransmission scenarios (more)

[Figure: cumulative-ACK scenario — Host A sends Seq=92, 8 bytes and Seq=100, 20 bytes; ACK=100 is lost (X), but ACK=120 arrives before the timeout, so nothing is retransmitted and SendBase jumps to 120.]

Room for improvement…

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at receiver → TCP receiver action:

- arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed → delayed ACK: wait up to 500 ms for the next segment; if no next segment, send ACK
- arrival of in-order segment with expected seq #; one other segment has an ACK pending → immediately send a single cumulative ACK, ACKing both in-order segments
- arrival of out-of-order segment with higher-than-expected seq #; gap detected → immediately send a duplicate ACK, indicating the seq # of the next expected byte
- arrival of a segment that partially or completely fills the gap → immediately send an ACK, provided the segment starts at the lower end of the gap

ACK the "largest in-order byte" seq #. (Delayed ACKs REDUCE FEEDBACK TRAFFIC BY HALF.)

3-66

Fast Retransmit

The timeout period is often relatively long: long delay before resending a lost packet.

Detect lost segments via duplicate ACKs: the sender often sends many segments back-to-back; if a segment is lost, there will likely be many duplicate ACKs.

If the sender receives 3 duplicate ACKs for the same data, it supposes that the segment after the ACKed data was lost — fast retransmit: resend the segment before the timer expires.

3-67

Fast retransmit algorithm:

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments) start timer
  } else {                                    /* a duplicate ACK for an already-ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y   /* fast retransmit */
  }

Q: why resend the pkt with seq # y? A: that is what the receiver expects.

3-68

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP — segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-69

TCP Flow Control

The receive side of a TCP connection has a receive buffer; the app process may be slow at reading from the buffer.

Flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast.

Speed-matching service: matching the send rate to the receiving app's drain rate.

3-70

TCP Flow control how it works

(Suppose the TCP receiver discards out-of-order segments.)

spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]

The rcvr advertises the spare room by including the value of RcvWindow in segments.

The sender limits unACKed data to RcvWindow — this guarantees the receive buffer doesn't overflow.

This goes to show that the design of the header is important.
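A minimal Java sketch of the receiver-side bookkeeping, using the slide's variable names (the class and method names are illustrative):

class ReceiveWindow {
    private final long rcvBuffer;          // total buffer size in bytes
    private long lastByteRcvd, lastByteRead;

    ReceiveWindow(long rcvBuffer) { this.rcvBuffer = rcvBuffer; }

    void onSegment(long bytes) { lastByteRcvd += bytes; }   // in-order data arrives
    void onAppRead(long bytes) { lastByteRead += bytes; }   // application drains the buffer

    // Value advertised back to the sender in the segment's receive-window field.
    long rcvWindow() { return rcvBuffer - (lastByteRcvd - lastByteRead); }
}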

3-71

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP — segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender and receiver establish a "connection" before exchanging data segments; they initialize TCP variables: seq #'s, buffers, flow-control info (e.g., RcvWindow).

client: connection initiator — Socket clientSocket = new Socket(hostname, portNumber);

server: contacted by the client — Socket connectionSocket = welcomeSocket.accept();

Three-way handshake:

Step 1: client host sends a TCP SYN segment to the server — specifies the initial seq #, carries no data.

Step 2: server host receives the SYN, replies with a SYN-ACK segment — the server allocates buffers and specifies the server's initial seq #.

Step 3: client receives the SYN-ACK, replies with an ACK segment, which may contain data.

IMPORTANT: SYN FLOODING.

3-73

TCP three-way handshake

connection request (client → server): SYN=1, seq=client_isn

connection granted (server → client): SYN=1, seq=server_isn, ack=client_isn+1

ACK (client → server): SYN=0, seq=client_isn+1, ack=server_isn+1
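A small Java sketch showing where the handshake happens from the application's point of view — it completes inside connect()/accept(), and the app never sees SYN / SYN-ACK / ACK. Port 6789 and the class name are arbitrary assumptions.

import java.net.ServerSocket;
import java.net.Socket;

public class HandshakeDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket welcomeSocket = new ServerSocket(6789);
        new Thread(() -> {
            try (Socket clientSocket = new Socket("localhost", 6789)) {
                // the Socket constructor returns only after the 3-way handshake completes
                System.out.println("client connected: " + clientSocket.isConnected());
            } catch (Exception e) { e.printStackTrace(); }
        }).start();
        try (Socket connectionSocket = welcomeSocket.accept()) {
            System.out.println("server accepted " + connectionSocket.getRemoteSocketAddress());
        }
        welcomeSocket.close();
    }
}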

3-74

TCP Connection Management (cont)

Closing a connection: the client closes the socket: clientSocket.close();

Step 1: the client end system sends a TCP FIN control segment to the server.

Step 2: the server receives the FIN, replies with an ACK; it closes the connection and sends its own FIN.

[Figure: client sends FIN (close); server ACKs, then sends FIN (close); client ACKs, enters timed wait, then closed.]

Q: why don't we combine the ACK and the FIN? A: the side sending the ACK may still have data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3: the client receives the FIN, replies with an ACK; it enters "timed wait" — it will respond with an ACK to any received FINs.

Step 4: the server receives the ACK; the connection is closed.

Note: with a small modification, this can handle simultaneous FINs.

[Figure: client (closing) sends FIN; server ACKs; server (closing) sends FIN; client ACKs, waits in timed wait, then both sides are closed.]

3-76

TCP Connection Management (cont)

[Figures: TCP client lifecycle and TCP server lifecycle state diagrams.]

3-77

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP — segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-78

Principles of Congestion Control

Congestion, informally: "too many sources sending too much data too fast for the network to handle".

Different from flow control! Manifestations: lost packets (buffer overflow at routers); long delays (queueing in router buffers).

Another top-10 problem! TCP provides one of the MANY WAYS to perform CC.

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous); one router, infinite buffers, capacity C; no retransmission.

large delays when congested; maximum achievable throughput.

[Figure: Host A sends λ_in (original data, from the application) into an unlimited shared output-link buffer; Host B receives λ_out (to the application).]

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers; the sender retransmits lost packets.

[Figure: Host A offers λ_in (original data) and λ'_in (original data plus retransmitted data) into a finite shared output-link buffer; Host B receives λ_out.]

Retransmission happens at the transport layer because pkts got dropped.

3-81

Causes/costs of congestion: scenario 2

(a) always: λ_in = λ_out (goodput)
(b) "perfect" retransmission, only when loss: λ'_in > λ_out
(c) retransmission of delayed (not lost) packets makes λ'_in larger (than the perfect case) for the same λ_out

"costs" of congestion: more work (retransmissions) for a given "goodput"; unneeded retransmissions — the link carries multiple copies of a pkt.

3-82

Causes/costs of congestion: scenario 3 — four senders, multihop paths, timeout/retransmit.

Q: what happens as λ_in and λ'_in increase?

[Figure: Host A sends λ_in (original data) plus λ'_in (original data plus retransmitted data) through finite shared output-link buffers; Host B receives λ_out.]

3-83

Causes/costs of congestion: scenario 3

Another "cost" of congestion: when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted!

[Figure: as offered load grows, λ_out eventually collapses.] The system collapses (e.g., students!).

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR: available bit rate — an "elastic service": if the sender's path is "underloaded", the sender should use the available bandwidth; if the sender's path is congested, the sender is throttled to the minimum guaranteed rate.

RM (resource management) cells: sent by the sender, interspersed with data cells; bits in the RM cell are set by switches ("network-assisted"): NI bit: no increase in rate (mild congestion); CI bit: congestion indication. RM cells are returned to the sender by the receiver, with the bits intact.

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell; the sender's send rate is thus the minimum supportable rate on the path.

EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell.

3-87

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP — segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-88

TCP Congestion Control

end-to-end control (no network assistance).

sender limits transmission: LastByteSent - LastByteAcked ≤ CongWin

roughly: rate = CongWin / RTT bytes/sec

CongWin is a dynamic function of perceived network congestion.

How does the sender perceive congestion? loss event = timeout or 3 duplicate ACKs; the TCP sender reduces its rate (CongWin) after a loss event.

three mechanisms: AIMD, slow start, conservative behavior after timeout events.

Note: CC must be efficient to make use of the available BW.

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event.

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing.

[Figure: congestion window (8, 16, 24 Kbytes, …) vs. time for a long-lived TCP connection.]

Role of ACKs: 1. an indication of loss; 2. ACKs are self-clocking — if delay is large, the rate of congestion-window growth is reduced.

Human analogy: HK government or CUHK.

3-90

TCP Slow Start

When the connection begins, CongWin = 1 MSS. Example: MSS = 500 bytes & RTT = 200 msec → initial rate = 20 kbps.

The available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate.

When the connection begins, increase the rate exponentially fast until the first loss event.

3-91

TCP Slow Start (more)

When the connection begins, increase the rate exponentially until the first loss event: double CongWin every RTT; done by incrementing CongWin for every ACK received.

Summary: the initial rate is slow, but it ramps up exponentially fast.

[Figure: Host A sends one segment, then two segments, then four segments, each burst one RTT apart.]

3-92

Refinement: after 3 dup ACKs, CongWin is cut in half and the window then grows linearly. This is known as the "fast recovery" phase, implemented in the newer TCP Reno.

But after a timeout event, CongWin is instead set to 1 MSS; the window then grows exponentially up to a threshold, then grows linearly.

Philosophy: 3 dup ACKs indicate the network is capable of delivering some segments; a timeout before 3 dup ACKs is "more alarming".

3-93

Refinement (more)

Q: when should the exponential increase switch to linear?

A: when CongWin gets to 1/2 of its value before the timeout.

Implementation: a variable Threshold; at a loss event, Threshold is set to 1/2 of the CongWin just before the loss event.

[Plot: congestion window size (segments, 0–14) vs. transmission round (1–15), showing the threshold and the diverging behavior of TCP Tahoe and TCP Reno after a loss.]

3-94

Summary: TCP congestion control

When CongWin is below Threshold, the sender is in the slow-start phase: the window grows exponentially.

When CongWin is above Threshold, the sender is in the congestion-avoidance phase: the window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss.
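A compact Java sketch of these Reno-style rules, in units of MSS (the initial threshold of 64 MSS and the names are assumptions):

class TcpCongestionControl {
    double congWin = 1, threshold = 64;          // in MSS; assumed initial threshold

    void onNewAck() {
        if (congWin < threshold) congWin += 1;   // slow start: +1 MSS per ACK (doubles per RTT)
        else congWin += 1.0 / congWin;           // congestion avoidance: ~+1 MSS per RTT
    }
    void onTripleDupAck() {                      // fast retransmit / fast recovery
        threshold = congWin / 2;
        congWin = threshold;
    }
    void onTimeout() {                           // "more alarming": back to 1 MSS, slow start
        threshold = congWin / 2;
        congWin = 1;
    }
}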

3-95

TCP sender congestion control

Event / State / TCP sender action / Commentary:

- ACK receipt for previously unACKed data, in Slow Start (SS): CongWin = CongWin + MSS; if (CongWin > Threshold) set state to Congestion Avoidance. — Results in a doubling of CongWin every RTT.
- ACK receipt for previously unACKed data, in Congestion Avoidance (CA): CongWin = CongWin + MSS·(MSS/CongWin). — Additive increase, resulting in an increase of CongWin by 1 MSS every RTT.
- Loss event detected by triple duplicate ACK, in SS or CA: Threshold = CongWin/2; CongWin = Threshold; set state to Congestion Avoidance. — Fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS.
- Timeout, in SS or CA: Threshold = CongWin/2; CongWin = 1 MSS; set state to Slow Start. — Enter slow start.
- Duplicate ACK, in SS or CA: increment the duplicate-ACK count for the segment being ACKed. — CongWin and Threshold are not changed.

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when a loss occurs.

When the window is W, throughput is W/RTT.

Just after a loss, the window drops to W/2, throughput to W/(2·RTT).

Average throughput: 0.75 · W/RTT.

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT; want 10 Gbps throughput.

Requires a window size of W = 83,333 in-flight segments.

Throughput in terms of loss rate:

  throughput = 1.22 · MSS / (RTT · √L)

→ L = 2·10^-10. Wow — new versions of TCP are needed for high speed!
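A one-screen Java sketch that inverts the relation above to recover the required loss rate (numbers are the slide's example):

public class TcpThroughput {
    public static void main(String[] args) {
        double mss = 1500 * 8, rtt = 0.100, target = 10e9;     // bits, seconds, bits/sec
        // throughput = 1.22 * MSS / (RTT * sqrt(L))  =>  L = (1.22 * MSS / (RTT * throughput))^2
        double lossRate = Math.pow(1.22 * mss / (rtt * target), 2);
        System.out.println(lossRate);                          // ~2e-10, as on the slide
    }
}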

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g., similar propagation delay, etc.):

additive increase gives a slope of 1 as throughput increases; multiplicative decrease decreases throughput proportionally.

[Figure: connection 2 throughput vs. connection 1 throughput, both bounded by R; the trajectory alternates between congestion-avoidance additive increase and loss-triggered "decrease window by factor of 2", converging toward the equal-bandwidth-share line.]

3-100

Fairness (more)

Fairness and UDP: multimedia apps often do not use TCP — they do not want their rate throttled by congestion control; instead they use UDP: pump audio/video at a constant rate, tolerate packet loss. (Research area: TCP-friendly rate control.)

Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this. Example: a link of rate R supporting 9 connections; a new app asking for 1 TCP gets rate R/10; a new app asking for 11 TCPs gets around R/2.

3-101

Delay modeling

Q: how long does it take to receive an object from a Web server after sending a request?

Delay is influenced by: TCP connection establishment; data transmission delay; slow start (the sender's congestion window).

Notation, assumptions: one link between client and server, of rate R; S: MSS (bits); O: object size (bits); no retransmissions (no loss, no corruption); protocol overhead is negligible.

Window size: first assume a fixed congestion window of W segments; then dynamic window modeling (slow start).

3-102

Fixed congestion window (1) — assume no congestion-window constraint.

Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R — the ACK for the first segment in the window returns before a window's worth of data has been sent.

  delay = 2·RTT + O/R

(The object request is piggybacked. This is the "lower bound" on latency.)

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R — the sender must wait for an ACK after sending a window's worth of data.

  delay = 2·RTT + O/R + (K-1)·[S/R + RTT - WS/R]

where K is the number of windows of data that cover the object: if O is the "size" of the object, K = O/(WS), rounded up to an integer.

3-104

TCP Delay Modeling Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the delay for one object is:

  Latency = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where P is the number of times TCP idles at the server:

  P = min{Q, K-1}

- Q is the number of times the server would idle if the object were of infinite size,
- K is the number of windows that cover the object.
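A small Java sketch of this latency model. The expression used for Q (the number of idle periods for an infinite-size object) is an assumption here — the slides leave its derivation as homework — and the printed comparison is only approximate:

public class SlowStartLatency {
    // O, S in bits; R in bits/sec; rtt in seconds.
    static double latency(double O, double S, double R, double rtt) {
        int K = (int) Math.ceil(Math.log(O / S + 1) / Math.log(2));          // windows covering O
        int Q = (int) Math.floor(Math.log(1 + rtt * R / S) / Math.log(2)) + 1; // assumed formula
        int P = Math.min(Q, K - 1);
        return 2 * rtt + O / R + P * (rtt + S / R) - (Math.pow(2, P) - 1) * S / R;
    }
    public static void main(String[] args) {
        // Experiment A's 1 Mbps row: S = 536 bytes, RTT = 100 ms, O = 100 Kbytes
        System.out.println(latency(100e3 * 8, 536 * 8, 1e6, 0.100)); // ~1.4 s; compare Experiment A
    }
}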

3-105

TCP Delay Modeling Slow Start (2)

[Figure: timeline at client and server — initiate TCP connection; request object; first window = S/R; second window = 2S/R; third window = 4S/R; fourth window = 8S/R; complete transmission; object delivered.]

Example: O/S = 15 segments; K = 4 windows; Q = 2; P = min{K-1, Q} = 2 → the server idles P = 2 times.

Delay components: 2·RTT for connection establishment and request; O/R to transmit the object; the time the server idles due to slow start. The server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

From the time the server starts to send a segment until it receives an acknowledgement: RTT + S/R.

Time to transmit the k-th window: 2^(k-1)·S/R.

Idle time after the k-th window: [S/R + RTT - 2^(k-1)·S/R]^+, for k = 1, 2, …, K.

  delay = 2·RTT + O/R + Σ_{k=1}^{P} idleTime_k
        = 2·RTT + O/R + Σ_{k=1}^{P} [S/R + RTT - 2^(k-1)·S/R]
        = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

[Figure: the same client/server timeline as before, with windows S/R, 2S/R, 4S/R, 8S/R.]

3-107

TCP Delay Modeling (4)

Recall that K = the number of windows that cover the object. How do we calculate K?

  K = min{k : 2^0·S + 2^1·S + … + 2^(k-1)·S ≥ O}
    = min{k : (2^k - 1)·S ≥ O}
    = min{k : 2^k ≥ O/S + 1}
    = min{k : k ≥ log2(O/S + 1)}
    = ⌈log2(O/S + 1)⌉

The calculation of Q, the number of idle periods for an infinite-size object, is similar (see HW).

3-108

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 Kbytes (relatively large)

  R        | O/R      | P | min latency (O/R + 2·RTT) | latency with slow start
  28 kbps  | 28.6 sec | 1 | 28.8 sec                  | 28.9 sec
  100 kbps | 8 sec    | 2 | 8.2 sec                   | 8.4 sec
  1 Mbps   | 800 msec | 5 | 1.0 sec                   | 1.5 sec
  10 Mbps  | 80 msec  | 7 | 0.28 sec                  | 0.98 sec

Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 Kbytes (relatively small)

  R        | O/R      | P | min latency (O/R + 2·RTT) | latency with slow start
  28 kbps  | 1.43 sec | 1 | 1.63 sec                  | 1.73 sec
  100 kbps | 0.4 sec  | 2 | 0.6 sec                   | 0.76 sec
  1 Mbps   | 40 msec  | 3 | 0.24 sec                  | 0.52 sec
  10 Mbps  | 4 msec   | 3 | 0.20 sec                  | 0.50 sec

Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 Kbytes (relatively small)

  R        | O/R      | P | min latency (O/R + 2·RTT) | latency with slow start
  28 kbps  | 1.43 sec | 3 | 3.4 sec                   | 5.8 sec
  100 kbps | 0.4 sec  | 3 | 2.4 sec                   | 5.2 sec
  1 Mbps   | 40 msec  | 3 | 2.0 sec                   | 5.0 sec
  10 Mbps  | 4 msec   | 3 | 2.0 sec                   | 5.0 sec

Slow start can significantly increase latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling — assume a Web page consists of: 1 base HTML page (of size O bits) and M images (each of size O bits).

Non-persistent HTTP: M+1 TCP connections in series; Response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times.

Persistent HTTP: 2 RTT to request and receive the base HTML file; 1 RTT to request and receive the M images; Response time = (M+1)·O/R + 3·RTT + sum of idle times.

Non-persistent HTTP with X parallel connections (suppose M/X is an integer): 1 TCP connection for the base file; M/X sets of parallel connections for the images; Response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times.
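A short Java sketch comparing the three formulas while ignoring the "sum of idle times" term (parameter values follow the charts on the next slides):

public class HttpResponseTime {
    public static void main(String[] args) {
        double O = 5e3 * 8, R = 1e6, rtt = 0.100;   // 5 Kbyte objects, 1 Mbps, 100 ms
        int M = 10, X = 5;                          // images, parallel connections
        double nonPersistent = (M + 1) * O / R + (M + 1) * 2 * rtt;
        double persistent    = (M + 1) * O / R + 3 * rtt;
        double parallel      = (M + 1) * O / R + ((double) M / X + 1) * 2 * rtt;
        System.out.printf("%.2f  %.2f  %.2f%n", nonPersistent, persistent, parallel);
    }
}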

3-112

[Chart: HTTP response time in seconds (0–20) for 28 kbps, 100 kbps, 1 Mbps and 10 Mbps access links, comparing non-persistent, persistent, and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.]

For a low-bandwidth connection, response time is dominated by transmission time; persistent connections give only a minor improvement over parallel connections.

3-113

[Chart: HTTP response time in seconds (0–70) for the same access rates with RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.]

For larger RTT, response time is dominated by TCP establishment & slow-start delays. Persistent connections now give an important improvement, particularly in high delay·bandwidth networks.

3-114

Chapter 3 Summary: principles behind transport-layer services: multiplexing/demultiplexing, reliable data transfer, flow control, congestion control; instantiation and implementation in the Internet: UDP, TCP.

Next: leaving the network "edge" (application and transport layers) and heading into the network "core".

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 3: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-3

Transport services and protocols provide logical

communication between app processes running on different hosts

transport protocols run in end systems send side breaks app

messages into segments passes to network layer

rcv side reassembles segments into messages passes to app layer

more than one transport protocol available to apps Internet TCP and UDP

application

transportnetworkdata linkphysical

application

transportnetworkdata linkphysical

networkdata linkphysical

networkdata linkphysical

networkdata linkphysical

networkdata linkphysicalnetwork

data linkphysical

logical end-end transport

3-4

Transport vs network layer network layer

logical communication between hosts

transport layer logical communication between processes relies on enhances

network layer services

Household analogy12 kids sending

letters to 12 kids processes = kids app messages =

letters in envelopes hosts = houses transport protocol

= Ann and Bill network-layer

protocol = postal service

Another analogy1 Post office -gt network layer2 My wife -gt transport layer

3-5

Internet transport-layer protocols reliable in-order

delivery (TCP) congestion control

(distributed control) flow control connection setup

unreliable unordered delivery UDP no-frills extension of

ldquobest-effortrdquo IP

services not available delay guarantees bandwidth guarantees

application

transportnetworkdata linkphysical

application

transportnetworkdata linkphysical

networkdata linkphysical

networkdata linkphysical

networkdata linkphysical

networkdata linkphysicalnetwork

data linkphysical

logical end-end transport

Research issues

3-6

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-7

Multiplexingdemultiplexing

application

transport

network

link

physical

P1 application

transport

network

link

physical

application

transport

network

link

physical

P2P3 P4P1

host 1 host 2 host 3

= process= socket

delivering received segmentsto correct socket

Demultiplexing at rcv hostgathering data from multiplesockets enveloping data with header (later used for demultiplexing)

Multiplexing at send host

FTP telnet

3-8

How demultiplexing works host receives IP datagrams

each datagram has source IP address destination IP address

each datagram carries 1 transport-layer segment

each segment has source destination port number (recall well-known port numbers for specific applications)

host uses IP addresses amp port numbers to direct segment to appropriate socket

source port dest port

32 bits

applicationdata

(message)

other header fields

TCPUDP segment format

3-9

Connectionless demultiplexing

Create sockets with port numbers

DatagramSocket mySocket1 = new DatagramSocket(99111)

DatagramSocket mySocket2 = new DatagramSocket(99222)

UDP socket identified by two-tuple

(dest IP address dest port number)

When host receives UDP segment checks destination port

number in segment directs UDP segment to

socket with that port number

IP datagrams with different source IP addresses andor source port numbers directed to same socket (this is how a system can serve multiple requests)

3-10

Connectionless demux (cont)

DatagramSocket serverSocket = new DatagramSocket(6428)

ClientIPB

P3

client IP A

P1P1P3

serverIP C

SP 6428

DP 9157

SP 9157

DP 6428

SP 6428

DP 5775

SP 5775

DP 6428

SP provides ldquoreturn addressrdquo

Source IP and port can be spoofed

Based on destination IP and

port

3-11

Connection-oriented demux

TCP socket identified by 4-tuple source IP address source port number dest IP address dest port number

recv host uses all four values to direct segment to appropriate socket

Server host may support many simultaneous TCP sockets each socket identified

by its own 4-tuple

Web servers have different sockets for each connecting client non-persistent HTTP

will have different socket for each request

3-12

Connection-oriented demux (cont)

ClientIPB

P3

client IP A

P1P1P3

serverIP C

SP 80

DP 9157

SP 9157

DP 80

SP 80

DP 5775

SP 5775

DP 80

P4

(S-IPSP D-IP DP)

3-13

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-14

UDP User Datagram Protocol [RFC 768]

ldquono frillsrdquo ldquobare bonesrdquo Internet transport protocol

ldquobest effortrdquo service UDP segments may be lost delivered out of order

to app connectionless

no handshaking between UDP sender receiver

each UDP segment handled independently of others

Why is there a UDP no connection

establishment (which can add delay)

simple no connection state at sender receiver

small segment header no congestion control

UDP can blast away as fast as desired

3-15

UDP more

often used for streaming multimedia apps loss tolerant rate sensitive

other UDP uses DNS SNMP

reliable transfer over UDP add reliability at application layer application-specific

error recovery (eg FTP based on UDP but with recovery)

source port dest port

32 bits

Applicationdata

(message)

UDP segment format

length checksumLength in

bytes of UDPsegmentincluding

headerWhen the network is stressed you PRAY

3-16

UDP checksum

Sender treat segment

contents as sequence of 16-bit integers

checksum addition (1rsquos complement sum) of segment contents

sender puts checksum value into UDP checksum field

Receiver compute checksum of

received segment check if computed

checksum equals checksum field value NO - error detected YES - no error detected

But maybe errors nonetheless More later hellip

Goal detect ldquoerrorsrdquo (eg flipped bits) in transmitted segment

eg 1+2+3 = 6 So is 0+3+3=6

3-17

Internet Checksum Example Note

When adding numbers a carryout from the most significant bit needs to be added to the result

Example add two 16-bit integers

1 1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 01 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 1 0 01 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1

wraparound

sumchecksum

3-18

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-19

Principles of Reliable data transfer important in app transport link layers top-10 list of important networking topics

characteristics of unreliable channel will determine complexity of reliable data transfer protocol (rdt)

abstractionThis picture sets the scenario

3-20

Reliable data transfer getting started

sendside

receiveside

rdt_send() called from above (eg by app)

Passed data to deliver to receiver upper

layer

udt_send() called by rdtto transfer packet over

unreliable channel to receiver

rdt_rcv() called when packet arrives on rcv-side of

channel

deliver_data() called by rdt to deliver data

to upper

Let us now look at the gut of these modules Any question

3-21

Reliable data transfer getting startedWersquoll incrementally develop sender receiver

sides of reliable data transfer protocol (rdt)

consider only unidirectional data transfer but control info will flow on both directions

use finite state machines (FSM) to specify sender receiver

state1

state2

event causing state transitionactions taken on state transition

state when in this ldquostaterdquo next

state uniquely determined by

next eventeventactions

(DONrsquoT FALL ASLEEP)

Event timer receives message hellipetcAction executes a program send message hellipetc

3-22

Rdt10 reliable transfer over a reliable channel

underlying channel perfectly reliable no bit errors no loss of packets

separate FSMs for sender receiver sender sends data into underlying channel receiver reads data from underlying channel

Wait for call from

above packet = make_pkt(data)udt_send(packet)

rdt_send(data)

extract (packetdata)deliver_data(data)

Wait for call from

below

rdt_rcv(packet)

sender receiver

In reality this is an unrealistic assumption but

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in packet recall UDP checksum to detect bit errors

the question how to recover from errors acknowledgements (ACKs) receiver explicitly tells

sender that pkt received OK negative acknowledgements (NAKs) receiver

explicitly tells sender that pkt had errors sender retransmits pkt on receipt of NAK human scenarios using ACKs NAKs

new mechanisms in rdt20 (beyond rdt10) error detection receiver feedback control msgs (ACKNAK) rcvr-

gtsender

Ack I love u I love u 2Nak I love u I donrsquot love u

3-24

rdt20 FSM specification

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

belowsender

receiverrdt_send(data)

Buffer is needed to store data from application layer or to block call

3-25

rdt20 operation with no errors

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

3-26

rdt20 error scenario

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

GOT IT

3-27

rdt20 has a fatal flaw

What happens if ACKNAK corrupted

sender doesnrsquot know what happened at receiver

canrsquot just retransmit possible duplicate

What to do sender ACKsNAKs

receiverrsquos ACKNAK What if sender ACKNAK lost

retransmit but this might cause retransmission of correctly received pkt

Handling duplicates sender adds sequence

number to each pkt sender retransmits

current pkt if ACKNAK garbled

receiver discards (doesnrsquot deliver up) duplicate pkt

Sender sends one packet then waits for receiver response

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender:
seq # added to pkt; two seq #'s (0, 1) will suffice. Why?
must check if the received ACK/NAK is corrupted
twice as many states: state must "remember" whether the "current" pkt has a 0 or 1 seq #

Receiver:
must check if the received packet is a duplicate; state indicates whether 0 or 1 is the expected pkt seq #
note: the receiver can not know if its last ACK/NAK was received OK at the sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt2.1, using ACKs only
instead of NAK, the receiver sends an ACK for the last pkt received OK; the receiver must explicitly include the seq # of the pkt being ACKed
a duplicate ACK at the sender results in the same action as a NAK: retransmit the current pkt
This is important because TCP uses this approach (no NAKs).

3-32

rdt22 sender receiver fragments

sender FSM fragment:
Wait for call 0 from above:
  rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt)
Wait for ACK 0:
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,1)): udt_send(sndpkt)
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0): (move on to seq 1)

receiver FSM fragment:
Wait for 0 from below:
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || has_seq1(rcvpkt)): udt_send(sndpkt)   (re-send the last ACK)
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq0(rcvpkt): extract(rcvpkt, data); deliver_data(data); sndpkt = make_pkt(ACK0, chksum); udt_send(sndpkt)

3-33

rdt30 channels with errors and loss

New assumption: the underlying channel can also lose packets (data or ACKs)
checksum, seq #'s, ACKs and retransmissions will be of help, but not enough
Q: how to deal with loss? if the sender waits until it is certain that data or an ACK was lost, then retransmits: yuck, drawbacks?

Approach: sender waits a "reasonable" amount of time for the ACK
retransmits if no ACK is received in this time
if the pkt (or ACK) was just delayed (not lost): the retransmission will be a duplicate, but the use of seq #'s already handles this; the receiver must specify the seq # of the pkt being ACKed
requires a countdown timer

What is the "right value" for the timer? It depends on the flow and the network conditions.
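To make the sender FSM on the next slide concrete, here is a minimal Python sketch of an alternating-bit (rdt3.0-style) sender, assuming an outer event loop that supplies udt_send, start_timer and stop_timer callables and invokes these handlers; the names follow the slides' pseudocode and the checksum is only a stand-in, not a real Internet checksum.

def make_pkt(seq, data):
    checksum = sum(data) % 65536          # stand-in for a real checksum; data assumed to be bytes
    return {"seq": seq, "data": data, "checksum": checksum}

class Rdt30Sender:
    def __init__(self, udt_send, start_timer, stop_timer):
        self.udt_send, self.start_timer, self.stop_timer = udt_send, start_timer, stop_timer
        self.seq = 0                       # alternating bit: 0 or 1
        self.sndpkt = None
        self.waiting_for_ack = False

    def rdt_send(self, data):
        if self.waiting_for_ack:           # stop-and-wait: refuse (or buffer) new data
            return False
        self.sndpkt = make_pkt(self.seq, data)
        self.udt_send(self.sndpkt)
        self.start_timer()
        self.waiting_for_ack = True
        return True

    def rdt_rcv(self, rcvpkt, corrupt):
        # ignore corrupt ACKs and ACKs for the other sequence number;
        # the timer will eventually force a retransmission
        if self.waiting_for_ack and not corrupt and rcvpkt["ack"] == self.seq:
            self.stop_timer()
            self.seq = 1 - self.seq
            self.waiting_for_ack = False

    def timeout(self):
        if self.waiting_for_ack:
            self.udt_send(self.sndpkt)     # retransmit the current packet
            self.start_timer()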

3-34

rdt30 sender

Wait for call 0 from above:
  rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt): (ignore)
Wait for ACK0:
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,1)): (do nothing)
  timeout: udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0): stop_timer
Wait for call 1 from above:
  rdt_send(data): sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt): (ignore)
Wait for ACK1:
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,0)): (do nothing)
  timeout: udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,1): stop_timer

3-35

rdt30 in action

Timer: tick, tick, ...

3-36

rdt30 in action

Is it necessary to send ACK1 again?

3-37

Performance of rdt30

rdt3.0 works, but performance stinks. Example: 1 Gbps link, 15 ms end-to-end propagation delay, 1 KB packet:

T_transmit = L/R = 8000 bits/pkt / 10^9 bits/sec = 8 microsec

U_sender: utilization, the fraction of time the sender is busy sending

U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027
(L = packet length in bits, R = transmission rate in bps, times in milliseconds)

1 KB pkt every 30 msec -> 33 kB/sec throughput over a 1 Gbps link
the network protocol limits the use of physical resources!
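A quick sanity check of these numbers (a short sketch; the values are the ones assumed on the slide):

R = 1e9            # link rate, bps
RTT = 0.030        # round-trip time, seconds
L = 8000           # packet length, bits (1 KB)

t_transmit = L / R                          # 8 microseconds
u_sender = (L / R) / (RTT + L / R)          # ~0.00027
throughput = L / (RTT + L / R) / 8          # bytes per second, ~33 KB/s

print(t_transmit, u_sender, throughput)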

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arrives
last packet bit arrives, send ACK
ACK arrives, send next packet, t = RTT + L/R

U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

3-39

Pipelined protocols

Pipelining: the sender allows multiple "in-flight", yet-to-be-acknowledged pkts
range of sequence numbers must be increased
buffering at sender and/or receiver

Two generic forms of pipelined protocols: go-Back-N, selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arrives
last packet bit arrives, send ACK
last bit of 2nd packet arrives, send ACK
last bit of 3rd packet arrives, send ACK
ACK arrives, send next packet, t = RTT + L/R

U_sender = 3 (L/R) / (RTT + L/R) = 0.024 / 30.008 = 0.0008

Increase utilization by a factor of 3!

3-41

Go-Back-N (sliding window protocol)

Sender: k-bit seq # in pkt header
"window" of up to N consecutive unACKed pkts allowed
ACK(n): ACKs all pkts up to, and including, seq # n ("cumulative ACK"); the sender may receive duplicate ACKs (see receiver)
timer for each in-flight pkt
timeout(n): retransmit pkt n and all higher seq # pkts in the window
(For now, treat the seq # as unlimited.)

Q: what happens when a receiver is totally disconnected? MAX RETRY.

DON'T FALL ASLEEP!

3-42

GBN sender extended FSM

base = 1
nextseqnum = 1

rdt_send(data):
  if (nextseqnum < base + N) {
    sndpkt[nextseqnum] = make_pkt(nextseqnum, data, chksum)
    udt_send(sndpkt[nextseqnum])
    if (base == nextseqnum) start_timer
    nextseqnum++
  } else refuse_data(data)          /* buffer data, or block the higher app */

timeout:
  start_timer
  udt_send(sndpkt[base]); udt_send(sndpkt[base+1]); ...; udt_send(sndpkt[nextseqnum-1])

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
  base = getacknum(rcvpkt) + 1
  if (base == nextseqnum) stop_timer    /* no pkt in the pipe */
  else start_timer                      /* reset the timer */

rdt_rcv(rcvpkt) && corrupt(rcvpkt): (ignore)
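A minimal Python sketch of the same sender logic, assuming udt_send, start_timer and stop_timer are supplied by a surrounding event loop and that sequence numbers are unbounded integers (as the earlier slide allows for now):

class GbnSender:
    def __init__(self, N, udt_send, start_timer, stop_timer):
        self.N = N
        self.udt_send, self.start_timer, self.stop_timer = udt_send, start_timer, stop_timer
        self.base = 1
        self.nextseqnum = 1
        self.sndpkt = {}                   # seq -> packet, kept until ACKed

    def rdt_send(self, data):
        if self.nextseqnum < self.base + self.N:
            pkt = {"seq": self.nextseqnum, "data": data}
            self.sndpkt[self.nextseqnum] = pkt
            self.udt_send(pkt)
            if self.base == self.nextseqnum:
                self.start_timer()         # timer covers the oldest unACKed pkt
            self.nextseqnum += 1
            return True
        return False                       # window full: refuse (or buffer) data

    def rdt_rcv(self, rcvpkt, corrupt):
        if corrupt:
            return
        self.base = rcvpkt["ack"] + 1      # cumulative ACK slides the window
        for s in [s for s in self.sndpkt if s < self.base]:
            del self.sndpkt[s]
        if self.base == self.nextseqnum:
            self.stop_timer()              # no packets in the pipe
        else:
            self.start_timer()             # restart for the new oldest pkt

    def timeout(self):
        self.start_timer()
        for s in range(self.base, self.nextseqnum):
            self.udt_send(self.sndpkt[s])  # go back N: resend all unACKed pkts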

3-43

GBN receiver extended FSM

ACK-only: always send an ACK for a correctly-received pkt with the highest in-order seq #
may generate duplicate ACKs
need only remember expectedseqnum
out-of-order pkt: discard (don't buffer) -> no receiver buffering! Re-ACK the pkt with the highest in-order seq #

expectedseqnum = 1
sndpkt = make_pkt(expectedseqnum, ACK, chksum)

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum):
  extract(rcvpkt, data); deliver_data(data)
  sndpkt = make_pkt(expectedseqnum, ACK, chksum); udt_send(sndpkt)
  expectedseqnum++

default: udt_send(sndpkt)

If an in-order pkt is received, deliver it to the app and ACK it; else just drop it.
We don't even need a timer at the receiver: simply rely on retransmissions by the sender.
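And a matching Python sketch of the receiver: deliver only in-order packets and always re-ACK the highest in-order sequence number received so far (udt_send and deliver_data are assumed to be provided):

class GbnReceiver:
    def __init__(self, udt_send, deliver_data):
        self.udt_send = udt_send
        self.deliver_data = deliver_data
        self.expectedseqnum = 1

    def rdt_rcv(self, rcvpkt, corrupt):
        if not corrupt and rcvpkt["seq"] == self.expectedseqnum:
            self.deliver_data(rcvpkt["data"])
            self.expectedseqnum += 1
        # in all cases, ACK the last in-order packet (duplicate ACKs are fine)
        self.udt_send({"ack": self.expectedseqnum - 1})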

3-44

GBN in action

What determines the size of the window?
1. RTT
2. Buffer at the receiver (flow control)
3. Network congestion

Q: GBN can have poor performance. How? The sender sends pkts 1,2,3,4,5,6,7,8,9; pkt 1 gets lost; the receiver gets pkts 2,3,4,5,... but will discard them.

Window size = N = 4

3-45

Selective Repeat (an improvement of the GBN protocol)
receiver individually acknowledges all correctly received pkts
buffers pkts as needed for eventual in-order delivery to the upper layer
E.g. the sender sends pkts 1,2,3,4,...,10; the receiver got 2,4,6,8,10; the sender resends only 1,3,5,7,9
sender only resends pkts for which an ACK was not received
sender keeps a timer for EACH unACKed pkt
sender window: N consecutive seq #'s; again limits the seq #'s of sent, unACKed pkts

3-46

Selective repeat sender receiver windows

Q: why do we have this region [already-ACKed packets still inside the window]?
The ACK may have been lost, or the ACK may still be on its way.

3-47

Selective repeat

sender:
data from above: if the next available seq # is in the window, send the pkt
timeout(n): resend pkt n, restart its timer
ACK(n) in [sendbase, sendbase+N]: mark pkt n as received; if n is the smallest unACKed pkt, advance the window base to the next unACKed seq # (slide the window)

receiver:
pkt n in [rcvbase, rcvbase+N-1]: send ACK(n); out-of-order: buffer; in-order: deliver (also deliver buffered, in-order pkts), advance the window to the next not-yet-received pkt
pkt n in [rcvbase-N, rcvbase-1]: ACK(n)
otherwise: ignore

Q: why do we need to re-ACK a pkt in [rcvbase-N, rcvbase-1]?
The earlier ACK may have been lost; the sender may time out and resend the pkt, and we need to ACK it again.
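A small Python sketch of the receiver side of these rules, assuming udt_send and deliver_data callables; the sender-side bookkeeping is analogous:

class SrReceiver:
    def __init__(self, N, udt_send, deliver_data):
        self.N = N
        self.udt_send = udt_send
        self.deliver_data = deliver_data
        self.rcvbase = 1
        self.buffer = {}                   # seq -> data for out-of-order pkts

    def rdt_rcv(self, pkt):
        n = pkt["seq"]
        if self.rcvbase <= n < self.rcvbase + self.N:
            self.udt_send({"ack": n})      # individual (not cumulative) ACK
            self.buffer[n] = pkt["data"]
            while self.rcvbase in self.buffer:      # deliver any in-order run
                self.deliver_data(self.buffer.pop(self.rcvbase))
                self.rcvbase += 1
        elif self.rcvbase - self.N <= n < self.rcvbase:
            self.udt_send({"ack": n})      # re-ACK: the earlier ACK may be lost
        # otherwise: ignore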

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example: seq #'s 0, 1, 2, 3; window size (N) = 3
the receiver sees no difference in the two scenarios!
it incorrectly passes duplicate data up as new in (a)
Q: what is the relationship between seq # space size and window size?
N <= (2^k)/2, i.e. the window can be at most half the sequence-number space.
In real life we use k bits to implement seq #'s: a practical issue!

3-50

Why bother studying reliable data transfer?
We know it is provided by TCP, so why bother to study it?
Sometimes we may need to implement "some form" of reliable transfer without the heavy-duty TCP.
A good example is multimedia streaming: even though the application is loss tolerant, if too many packets get lost it affects the visual quality, so we may want to implement some form of reliable transfer.
At the very least, appreciate the "good services" provided by some Internet gurus.

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

point-to-point: one sender, one receiver (not multicast)
reliable, in-order byte stream: no "message boundaries" (in the app layer we need delimiters)
pipelined: TCP congestion and flow control set the window size
send & receive buffers on each side (the application writes data into the send buffer through the socket door; TCP transmits segments; the application reads data from the receive buffer)
full duplex data: bi-directional data flow in the same connection; MSS: maximum segment size
connection-oriented: handshaking (exchange of control msgs) inits sender and receiver state (e.g. buffer size) before data exchange
flow controlled: the sender will not overwhelm the receiver

(The 800-lb gorilla in the transport stack. PAY ATTENTION!)

3-53

TCP segment structure

32 bits wide:
source port #, dest port #
sequence number (counting by "bytes" of data, not segments)
acknowledgement number
head len | not used | flags: URG ACK PSH RST SYN FIN | receive window (# bytes rcvr willing to accept)
checksum (Internet checksum, as in UDP) | urgent data pointer
options (variable length) -- due to this field we have a variable-length header
application data (variable length)

URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection establishment (setup, teardown commands)

3-54

TCP seq. #'s and ACKs
Seq. #'s: byte-stream "number" of the first byte in the segment's data
ACKs: seq # of the next byte expected from the other side; cumulative ACK
Q: how does the receiver handle out-of-order segments? A: the TCP spec doesn't say; up to the implementor

simple telnet scenario (Host A, Host B):
User types 'C':                        A -> B  Seq=42, ACK=79, data = 'C'
host B ACKs 'C' and echoes back 'C':   B -> A  Seq=79, ACK=43, data = 'C'
host A ACKs receipt of the echoed 'C': A -> B  Seq=43, ACK=80

Initial seq #'s are negotiated during the 3-way handshake.

3-55

TCP Round Trip Time and Timeout
Q: how to set the TCP timeout value?
longer than the RTT, but the RTT varies
too short: premature timeout, unnecessary retransmissions
too long: slow reaction to segment loss, poor performance

Q: how to estimate the RTT?
SampleRTT: measured time from segment transmission until ACK receipt; ignore retransmissions
SampleRTT will vary; want the estimated RTT to be "smoother": average several recent measurements, not just the current SampleRTT

[Figure: two tx/retx/ack timing diagrams showing why an estimate that includes retransmissions can come out too long or too short.]

3-56

TCP Round Trip Time and Timeout
EstimatedRTT = (1 - alpha) * EstimatedRTT + alpha * SampleRTT

Exponential weighted moving average: the influence of a past sample decreases exponentially fast; typical value: alpha = 0.125

With ERTT(0) = 0:
ERTT(1) = (1-alpha)*ERTT(0) + alpha*SRTT(1) = alpha*SRTT(1)
ERTT(2) = (1-alpha)*alpha*SRTT(1) + alpha*SRTT(2)
ERTT(3) = (1-alpha)*(1-alpha)*alpha*SRTT(1) + (1-alpha)*alpha*SRTT(2) + alpha*SRTT(3)

3-57

Example RTT estimation: RTT from gaia.cs.umass.edu to fantasia.eurecom.fr
[Figure: SampleRTT and EstimatedRTT in milliseconds (roughly 100-350 ms) versus time in seconds (1 to 106); EstimatedRTT tracks a smoothed version of the fluctuating SampleRTT.]

3-58

TCP Round Trip Time and Timeout
Setting the timeout (Jacobson/Karels): EstimatedRTT plus a "safety margin"
large variation in EstimatedRTT -> larger safety margin
first estimate how much SampleRTT deviates from EstimatedRTT:

DevRTT = (1 - beta) * DevRTT + beta * |SampleRTT - EstimatedRTT|      (typically beta = 0.25)

Then set the timeout interval:

TimeoutInterval = EstimatedRTT + 4 * DevRTT
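Putting the last two slides together, a small Python sketch of the estimator (alpha = 0.125 and beta = 0.25 as above; the SampleRTT values in the example are made up):

class RttEstimator:
    def __init__(self, alpha=0.125, beta=0.25):
        self.alpha, self.beta = alpha, beta
        self.estimated_rtt = None
        self.dev_rtt = 0.0

    def sample(self, sample_rtt):
        if self.estimated_rtt is None:     # first measurement seeds the estimate
            self.estimated_rtt = sample_rtt
        else:
            self.dev_rtt = ((1 - self.beta) * self.dev_rtt
                            + self.beta * abs(sample_rtt - self.estimated_rtt))
            self.estimated_rtt = ((1 - self.alpha) * self.estimated_rtt
                                  + self.alpha * sample_rtt)
        return self.timeout_interval()

    def timeout_interval(self):
        return self.estimated_rtt + 4 * self.dev_rtt

est = RttEstimator()
for s in (0.106, 0.120, 0.140, 0.090):     # SampleRTTs in seconds (example values)
    print(round(est.sample(s), 4))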

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates an rdt service on top of IP's unreliable service
pipelined segments (for performance)
cumulative ACKs
TCP uses a single retransmission timer
retransmissions are triggered by: timeout events; duplicate ACKs (for performance reasons)
initially consider a simplified TCP sender: ignore duplicate ACKs; ignore flow control and congestion control

3-61

TCP sender events:
data rcvd from app: create segment with seq #; the seq # is the byte-stream number of the first data byte in the segment; start the timer if not already running (think of the timer as being for the oldest unACKed segment); expiration interval: TimeOutInterval
timeout: retransmit the segment that caused the timeout; restart the timer
ACK rcvd: if it acknowledges previously unACKed segments, update what is known to be ACKed; start the timer if there are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event)

  event: data received from application above
    create TCP segment with sequence number NextSeqNum
    if (timer currently not running) start timer
    pass segment to IP
    NextSeqNum = NextSeqNum + length(data)

  event: timer timeout
    retransmit the not-yet-acknowledged segment with the smallest sequence number
    start timer

  event: ACK received, with ACK field value of y
    if (y > SendBase) {
      SendBase = y
      if (there are currently not-yet-acknowledged segments) start timer
    }
} /* end of loop forever */

Comment: SendBase-1 is the last cumulatively ACKed byte.
Example: SendBase-1 = 71, y = 73, so the rcvr wants 73+; y > SendBase, so new data is ACKed.

3-63

TCP retransmission scenarios

[Figure: two retransmission scenarios between Host A and Host B.]
Lost ACK scenario: A sends Seq=92, 8 bytes of data; B's ACK=100 is lost; A times out and retransmits Seq=92, 8 bytes of data; B re-ACKs 100. SendBase stays at 100 until the second ACK arrives.
Premature timeout: A sends Seq=92, 8 bytes of data and Seq=100, 20 bytes of data; A's timer for Seq=92 expires before ACK=100 arrives, so A retransmits Seq=92; B then sends a cumulative ACK=120. SendBase moves from 100 to 120.

3-64

TCP retransmission scenarios (more)

[Figure: cumulative ACK scenario between Host A and Host B.]
A sends Seq=92, 8 bytes of data and Seq=100, 20 bytes of data; ACK=100 is lost, but ACK=120 arrives before the timeout, so nothing is retransmitted. SendBase = 120.
Room for improvement...

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at receiver: arrival of in-order segment with expected seq #; all data up to the expected seq # already ACKed
  TCP receiver action: delayed ACK. Wait up to 500 ms for the next segment; if no next segment, send ACK

Event: arrival of in-order segment with expected seq #; one other segment has an ACK pending
  Action: immediately send a single cumulative ACK, ACKing both in-order segments  (REDUCE FEEDBACK TRAFFIC BY HALF)

Event: arrival of out-of-order segment with higher-than-expected seq #; gap detected
  Action: immediately send a duplicate ACK, indicating the seq # of the next expected byte (ACK the "largest in-order byte" seq #)

Event: arrival of a segment that partially or completely fills the gap
  Action: immediately send an ACK, provided that the segment starts at the lower end of the gap

3-66

Fast Retransmit

The time-out period is often relatively long: long delay before resending a lost packet.
Detect lost segments via duplicate ACKs: the sender often sends many segments back-to-back; if a segment is lost, there will likely be many duplicate ACKs.
If the sender receives 3 duplicate ACKs for the same data, it supposes that the segment after the ACKed data was lost:
fast retransmit: resend the segment before the timer expires.

3-67

Fast retransmit algorithm:

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments) start timer
  }
  else {
    increment count of dup ACKs received for y      /* a duplicate ACK for already-ACKed data */
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y         /* fast retransmit */
  }

Q: why resend the pkt with seq # y?
A: that is what the receiver expects.
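A compact Python sketch of this ACK-handling branch, assuming a state dict that carries send_base, dup_acks and the set of unACKed segments, plus retransmit and start_timer callables (all names here are illustrative):

def on_ack(y, state, retransmit, start_timer):
    """Slide SendBase on new ACKs, count duplicates, fast-retransmit on the 3rd."""
    if y > state["send_base"]:
        state["send_base"] = y             # new data ACKed (cumulative)
        state["dup_acks"] = 0
        if state["unacked"]:
            start_timer()
    else:
        state["dup_acks"] += 1             # duplicate ACK for already-ACKed data
        if state["dup_acks"] == 3:
            retransmit(y)                  # resend the segment starting at seq y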

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

the receive side of a TCP connection has a receive buffer
the app process may be slow at reading from the buffer
speed-matching service: matching the send rate to the receiving app's drain rate
flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast

3-70

TCP Flow control how it works

(Suppose the TCP receiver discards out-of-order segments.)
spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
the rcvr advertises the spare room by including the value of RcvWindow in segments
the sender limits unACKed data to RcvWindow: guarantees the receive buffer doesn't overflow
This goes to show that the design of the header is important.
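A tiny Python sketch of the receive-window arithmetic above (the names mirror the slide; the numeric example is made up):

def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    # spare room the receiver can advertise
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

def can_send(last_byte_sent, last_byte_acked, nbytes, rcv_window):
    # sender keeps unACKed data no larger than the advertised window
    return (last_byte_sent - last_byte_acked) + nbytes <= rcv_window

print(rcv_window(64_000, 50_000, 30_000))   # 44,000 bytes of spare room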

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall: the TCP sender and receiver establish a "connection" before exchanging data segments
initialize TCP variables: seq #s, buffers, flow control info (e.g. RcvWindow)
client: connection initiator -- Socket clientSocket = new Socket("hostname", port number);
server: contacted by client -- Socket connectionSocket = welcomeSocket.accept();

Three way handshake:
Step 1: client host sends a TCP SYN segment to the server; specifies the initial seq #; no data
Step 2: server host receives the SYN, replies with a SYN-ACK segment; the server allocates buffers; specifies the server's initial seq #
Step 3: client receives the SYN-ACK, replies with an ACK segment, which may contain data

IMPORTANT: SYN FLOODING

3-73

TCP three-way handshake

Connection request: SYN=1, seq=client_isn
Connection granted: SYN=1, seq=server_isn, ack=client_isn+1
ACK: SYN=0, seq=client_isn+1, ack=server_isn+1

3-74

TCP Connection Management (cont)

Closing a connection: the client closes its socket: clientSocket.close();
Step 1: the client end system sends a TCP FIN control segment to the server
Step 2: the server receives the FIN, replies with an ACK; closes the connection, sends its own FIN

[Figure: client sends FIN (close), server ACKs and then sends FIN (close), client ACKs and enters timed wait, then the connection is closed.]

Q: why don't we combine the ACK and the FIN?
A: the sender may still have some data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3: the client receives the FIN, replies with an ACK; it enters "timed wait", during which it will respond with an ACK to any received FINs
Step 4: the server receives the ACK; the connection is closed
Note: with a small modification, this can handle simultaneous FINs.

[Figure: client FIN (closing), server ACK then FIN (closing), client ACK and timed wait, then both sides closed.]

3-76

TCP Connection Management (cont)

TCP client lifecycle
TCP server lifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion, informally: "too many sources sending too much data too fast for the network to handle"
different from flow control!
manifestations: lost packets (buffer overflow at routers); long delays (queueing in router buffers)
another top-10 problem!
TCP provides one of the MANY WAYS to perform congestion control.

3-79

Causes/costs of congestion: scenario 1
two senders, two receivers (homogeneous)
one router, infinite buffers, capacity C
no retransmission
large delays when congested
maximum achievable throughput
[Figure: Host A and Host B send lambda_in (original data from the application) into a router with unlimited shared output link buffers; lambda_out is delivered to the receiving application.]

3-80

Causes/costs of congestion: scenario 2
one router, finite buffers
sender retransmission of lost packets
[Figure: Host A offers lambda_in (original data) and lambda'_in (original data plus retransmitted data) to a router with finite shared output link buffers; Host B receives lambda_out. Retransmissions happen at the transport layer because pkts get dropped.]

3-81

Causes/costs of congestion: scenario 2 (cont.)
always: lambda_in = lambda_out (goodput)
"perfect" retransmission, only when loss: lambda'_in > lambda_out
retransmission of a delayed (not lost) packet makes lambda'_in larger (than in the perfect case) for the same lambda_out
"costs" of congestion:
more work (retransmissions) for a given "goodput"
unneeded retransmissions: the link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3
four senders, multihop paths, timeout/retransmit
Q: what happens as lambda_in and lambda'_in increase?
[Figure: Host A offers lambda_in (original data) and lambda'_in (original data plus retransmitted data) across routers with finite shared output link buffers; Host B receives lambda_out.]

3-83

Causes/costs of congestion: scenario 3 (cont.)
Another "cost" of congestion: when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted!
[Figure: lambda_out collapses as offered load grows; the system collapses (e.g. students).]

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
no explicit feedback from the network
congestion inferred from end-system observed loss and delay
approach taken by TCP

Network-assisted congestion control:
routers provide feedback to end systems: a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM), or an explicit rate the sender should send at

3-85

Case study ATM ABR congestion control

ABR available bit rate

"elastic service": if the sender's path is "underloaded", the sender should use the available bandwidth; if the sender's path is congested, the sender is throttled to a minimum guaranteed rate

RM (resource management) cells:
sent by the sender, interspersed with data cells
bits in the RM cell are set by switches ("network-assisted"): NI bit: no increase in rate (mild congestion); CI bit: congestion indication
RM cells are returned to the sender by the receiver, with the bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell; the sender's send rate is thus the minimum supportable rate on the path
EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)
sender limits transmission: LastByteSent - LastByteAcked <= CongWin
roughly: rate = CongWin / RTT  bytes/sec
CongWin is a dynamic function of perceived network congestion
How does the sender perceive congestion?
loss event = timeout or 3 duplicate ACKs
the TCP sender reduces its rate (CongWin) after a loss event
three mechanisms: AIMD; slow start; conservative behaviour after timeout events
Note: congestion control must still be efficient, to make use of the available bandwidth.

3-89

TCP AIMD

[Figure: long-lived TCP connection; the congestion window follows a sawtooth between roughly 8, 16 and 24 Kbytes over time.]
multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing
Role of ACKs: 1. an indication of loss; 2. ACKs are self-clocking: if the delay is large, the rate at which the congestion window grows is reduced
Human analogy: HK government or CUHK.

3-90

TCP Slow Start

When a connection begins, CongWin = 1 MSS
Example: MSS = 500 bytes & RTT = 200 msec; initial rate = 20 kbps
the available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate
When a connection begins, increase the rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)

When a connection begins, increase the rate exponentially until the first loss event:
double CongWin every RTT
done by incrementing CongWin for every ACK received
Summary: the initial rate is slow, but it ramps up exponentially fast
[Figure: Host A sends one segment, then two segments, then four segments, one RTT apart.]

3-92

Refinement. After 3 dup ACKs:
CongWin is cut in half; the window then grows linearly
this is known as the "fast-recovery" phase, implemented in TCP-Reno
But after a timeout event:
CongWin is instead set to 1 MSS; the window then grows exponentially to a threshold, then grows linearly
Philosophy: 3 dup ACKs indicate the network is capable of delivering some segments; a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more). Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.
Implementation: a variable Threshold; at a loss event, Threshold is set to 1/2 of CongWin just before the loss event.
[Figure: congestion window size (segments, 0-14) vs. transmission round (1-15) for TCP Tahoe and TCP Reno, with the threshold marked.]

3-94

Summary: TCP Congestion Control
When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.
When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.
When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.
When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.
READ THE BOOK on TCP-Vegas: instead of reacting to a loss, anticipate & prepare for a loss.
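A minimal Python sketch of these summary rules, Reno-style, with CongWin kept in units of MSS; the initial threshold value is just an assumption for illustration:

class RenoCongestionControl:
    def __init__(self, threshold=8.0):
        self.cong_win = 1.0                # congestion window, in MSS
        self.threshold = threshold

    def on_ack(self):                      # one ACK for previously unACKed data
        if self.cong_win < self.threshold:
            self.cong_win += 1.0           # slow start: +1 MSS per ACK (doubles per RTT)
        else:
            self.cong_win += 1.0 / self.cong_win   # congestion avoidance: ~+1 MSS per RTT

    def on_triple_dup_ack(self):
        self.threshold = self.cong_win / 2
        self.cong_win = max(self.threshold, 1.0)   # fast recovery: multiplicative decrease

    def on_timeout(self):
        self.threshold = self.cong_win / 2
        self.cong_win = 1.0                # back to slow start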

3-95

TCP sender congestion control

Event: ACK receipt for previously unACKed data | State: Slow Start (SS) | Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance" | Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unACKed data | State: Congestion Avoidance (CA) | Action: CongWin = CongWin + MSS * (MSS/CongWin) | Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK | State: SS or CA | Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance" | Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout | State: SS or CA | Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start" | Commentary: enter slow start

Event: duplicate ACK | State: SS or CA | Action: increment duplicate ACK count for the segment being ACKed | Commentary: CongWin and Threshold not changed

3-96

TCP throughput

What's the average throughput of TCP, as a function of window size and RTT? (Ignore slow start.)
Let W be the window size when a loss occurs.
When the window is W, the throughput is W/RTT.
Just after a loss, the window drops to W/2, the throughput to W/(2*RTT).
Average throughput: 0.75 * W/RTT
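As a quick check of the 0.75 W/RTT result (the W, MSS and RTT values below are just an example):

def avg_tcp_throughput(W, MSS, RTT):
    # window oscillates between W/2 and W, so the average is 0.75*W per RTT
    return 0.75 * W * MSS / RTT            # bytes/sec

print(avg_tcp_throughput(W=20, MSS=1460, RTT=0.1))   # ~219 KB/s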

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires a window size of W = 83,333 in-flight segments
Throughput in terms of loss rate: Throughput = 1.22 * MSS / (RTT * sqrt(L))
-> L = 2 * 10^-10   Wow!
New versions of TCP for high-speed networks are needed!

3-98

TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
[Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R.]

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g. similar propagation delay, etc.):
additive increase gives a slope of 1 as throughput increases
multiplicative decrease decreases throughput proportionally
[Figure: connection 2 throughput vs. connection 1 throughput, each bounded by R; repeated "congestion avoidance: additive increase" and "loss: decrease window by factor of 2" steps move the operating point toward the equal-bandwidth-share line.]

3-100

Fairness (more)

Fairness and UDP:
multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
research area: TCP-friendly rate control

Fairness and parallel TCP connections:
nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this
example: a link of rate R supporting 9 connections
a new app asking for 1 TCP connection gets rate R/10
a new app asking for 11 TCP connections gets around R/2!

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?
Delay is influenced by: TCP connection establishment; data transmission delay; slow start (the sender's congestion window)

Notation, assumptions:
assume one link between client and server, of rate R
S: MSS (bits); O: object size (bits)
no retransmissions (no loss, no corruption); protocol overhead is negligible

Window size:
first assume a fixed congestion window of W segments
then a dynamic window, modeling slow start

3-102

Fixed congestion window (1): assume no congestion window constraint
Let W denote a "fixed" congestion window size (a positive integer).
First case: WS/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data has been sent
delay = 2RTT + O/R
(The object request is piggybacked; this is the "lower bound" of the latency.)

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R: wait for an ACK after sending a window's worth of data
delay = 2RTT + O/R + (K-1)*[S/R + RTT - WS/R]
where K is the number of windows of data that cover the object; if O is the "size" of the object, K = O/(WS) (rounded up to an integer).

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is:

Latency = 2RTT + O/R + P*[RTT + S/R] - (2^P - 1)*S/R

where P is the number of times TCP idles at the server:
P = min{Q, K-1}
- Q is the number of times the server would idle if the object were of infinite size
- and K is the number of windows that cover the object
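A small Python calculator for this formula, assuming a single link of rate R, segment size S and object size O (all in bits, as defined above) and an initial window of one segment; Q is found by direct search rather than a closed form:

import math

def slow_start_latency(O, R, S, RTT):
    K = math.ceil(math.log2(O / S + 1))            # windows that cover the object
    # Q: how many windows would leave the server idle if the object were infinite
    Q = 0
    while S / R + RTT - (2 ** Q) * (S / R) > 0:    # idle after window Q+1?
        Q += 1
    P = min(K - 1, Q)                              # times the server actually idles
    return (2 * RTT + O / R
            + P * (RTT + S / R)
            - (2 ** P - 1) * (S / R))

# Experiment-A style numbers: S = 536 bytes, RTT = 100 ms, O = 100 Kbytes, R = 1 Mbps
print(slow_start_latency(O=100e3 * 8, R=1e6, S=536 * 8, RTT=0.1))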

3-105

TCP Delay Modeling Slow Start (2)

[Figure: timing diagram at client and server -- initiate TCP connection (RTT), request object, then windows of size S/R, 2S/R, 4S/R, 8S/R are transmitted, with the server idling between windows, until transmission completes and the object is delivered.]
Example: O/S = 15 segments; K = 4 windows; Q = 2; P = min{K-1, Q} = 2; the server idles P = 2 times.
Delay components: 2 RTT for connection establishment and request; O/R to transmit the object; the time the server idles due to slow start.
The server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

time to transmit the kth window = 2^(k-1) * S/R
S/R + RTT = time from when the server starts to send a segment until the server receives its acknowledgement
idle time after the kth window = S/R + RTT - 2^(k-1) * S/R  (when positive), for k = 1, ..., P

delay = 2RTT + O/R + sum over k = 1..P of idleTime_k
      = 2RTT + O/R + sum over k = 1..P of [S/R + RTT - 2^(k-1) * S/R]
      = 2RTT + O/R + P*[S/R + RTT] - (2^P - 1)*S/R

[Same timing-diagram figure as on the previous slide, with windows k = 1, 2, ..., K.]

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

K = min{ k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O }
  = min{ k : (2^k - 1) S >= O }
  = min{ k : 2^k >= O/S + 1 }
  = min{ k : k >= log2(O/S + 1) }
  = ceil( log2(O/S + 1) )

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).

3-108

Experiment A

R        | O/R      | P | Min latency = O/R + 2 RTT | Latency with slow start
28 kbps  | 28.6 sec | 1 | 28.8 sec                  | 28.9 sec
100 kbps | 8 sec    | 2 | 8.2 sec                   | 8.4 sec
1 Mbps   | 800 msec | 5 | 1.0 sec                   | 1.5 sec
10 Mbps  | 80 msec  | 7 | 0.28 sec                  | 0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 Kbytes (relatively large)
1. Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R        | O/R      | P | Min latency = O/R + 2 RTT | Latency with slow start
28 kbps  | 1.43 sec | 1 | 1.63 sec                  | 1.73 sec
100 kbps | 0.4 sec  | 2 | 0.6 sec                   | 0.76 sec
1 Mbps   | 40 msec  | 3 | 0.24 sec                  | 0.52 sec
10 Mbps  | 4 msec   | 3 | 0.20 sec                  | 0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 Kbytes (relatively small)
1. Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

R        | O/R      | P | Min latency = O/R + 2 RTT | Latency with slow start
28 kbps  | 1.43 sec | 3 | 3.4 sec                   | 5.8 sec
100 kbps | 0.4 sec  | 3 | 2.4 sec                   | 5.2 sec
1 Mbps   | 40 msec  | 3 | 2.0 sec                   | 5.0 sec
10 Mbps  | 4 msec   | 3 | 2.0 sec                   | 5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 Kbytes (relatively small)
1. Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling
Assume a Web page consists of: 1 base HTML page (of size O bits); M images (each of size O bits).

Non-persistent HTTP:
M+1 TCP connections in series
Response time = (M+1) O/R + (M+1) 2 RTT + sum of idle times

Persistent HTTP:
2 RTT to request and receive the base HTML file; 1 RTT to request and receive the M images
Response time = (M+1) O/R + 3 RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
suppose M/X is an integer; 1 TCP connection for the base file; M/X sets of parallel connections for the images
Response time = (M+1) O/R + (M/X + 1) 2 RTT + sum of idle times
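A short Python sketch of the three response-time formulas, ignoring the slow-start idle times for brevity (the charts on the next slides include them); the numbers plugged in match the next slide's setup:

def non_persistent(O, R, RTT, M):
    return (M + 1) * O / R + (M + 1) * 2 * RTT

def persistent(O, R, RTT, M):
    return (M + 1) * O / R + 3 * RTT

def parallel_non_persistent(O, R, RTT, M, X):
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT

O, RTT, M, X = 5e3 * 8, 0.1, 10, 5          # 5 Kbyte objects, 100 ms RTT, M=10, X=5
for R in (28e3, 100e3, 1e6, 10e6):
    print(int(R), round(non_persistent(O, R, RTT, M), 2),
          round(persistent(O, R, RTT, M), 2),
          round(parallel_non_persistent(O, R, RTT, M, X), 2))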

3-112

[Figure: HTTP response time in seconds (0-20) for non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.]
For a low-bandwidth connection, response time is dominated by transmission time.
Persistent connections give only a minor improvement over parallel connections.

3-113

[Figure: HTTP response time in seconds (0-70) for non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.]
For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary
principles behind transport layer services: multiplexing, demultiplexing; reliable data transfer; flow control; congestion control
instantiation and implementation in the Internet: UDP, TCP
Next: leaving the network "edge" (application, transport layers) and heading into the network "core"

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 4: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-4

Transport vs network layer network layer

logical communication between hosts

transport layer logical communication between processes relies on enhances

network layer services

Household analogy12 kids sending

letters to 12 kids processes = kids app messages =

letters in envelopes hosts = houses transport protocol

= Ann and Bill network-layer

protocol = postal service

Another analogy1 Post office -gt network layer2 My wife -gt transport layer

3-5

Internet transport-layer protocols reliable in-order

delivery (TCP) congestion control

(distributed control) flow control connection setup

unreliable unordered delivery UDP no-frills extension of

ldquobest-effortrdquo IP

services not available delay guarantees bandwidth guarantees

application

transportnetworkdata linkphysical

application

transportnetworkdata linkphysical

networkdata linkphysical

networkdata linkphysical

networkdata linkphysical

networkdata linkphysicalnetwork

data linkphysical

logical end-end transport

Research issues

3-6

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-7

Multiplexingdemultiplexing

application

transport

network

link

physical

P1 application

transport

network

link

physical

application

transport

network

link

physical

P2P3 P4P1

host 1 host 2 host 3

= process= socket

delivering received segmentsto correct socket

Demultiplexing at rcv hostgathering data from multiplesockets enveloping data with header (later used for demultiplexing)

Multiplexing at send host

FTP telnet

3-8

How demultiplexing works host receives IP datagrams

each datagram has source IP address destination IP address

each datagram carries 1 transport-layer segment

each segment has source destination port number (recall well-known port numbers for specific applications)

host uses IP addresses amp port numbers to direct segment to appropriate socket

source port dest port

32 bits

applicationdata

(message)

other header fields

TCPUDP segment format

3-9

Connectionless demultiplexing

Create sockets with port numbers

DatagramSocket mySocket1 = new DatagramSocket(99111)

DatagramSocket mySocket2 = new DatagramSocket(99222)

UDP socket identified by two-tuple

(dest IP address dest port number)

When host receives UDP segment checks destination port

number in segment directs UDP segment to

socket with that port number

IP datagrams with different source IP addresses andor source port numbers directed to same socket (this is how a system can serve multiple requests)

3-10

Connectionless demux (cont)

DatagramSocket serverSocket = new DatagramSocket(6428)

ClientIPB

P3

client IP A

P1P1P3

serverIP C

SP 6428

DP 9157

SP 9157

DP 6428

SP 6428

DP 5775

SP 5775

DP 6428

SP provides ldquoreturn addressrdquo

Source IP and port can be spoofed

Based on destination IP and

port

3-11

Connection-oriented demux

TCP socket identified by 4-tuple source IP address source port number dest IP address dest port number

recv host uses all four values to direct segment to appropriate socket

Server host may support many simultaneous TCP sockets each socket identified

by its own 4-tuple

Web servers have different sockets for each connecting client non-persistent HTTP

will have different socket for each request

3-12

Connection-oriented demux (cont)

ClientIPB

P3

client IP A

P1P1P3

serverIP C

SP 80

DP 9157

SP 9157

DP 80

SP 80

DP 5775

SP 5775

DP 80

P4

(S-IPSP D-IP DP)

3-13

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-14

UDP User Datagram Protocol [RFC 768]

ldquono frillsrdquo ldquobare bonesrdquo Internet transport protocol

ldquobest effortrdquo service UDP segments may be lost delivered out of order

to app connectionless

no handshaking between UDP sender receiver

each UDP segment handled independently of others

Why is there a UDP no connection

establishment (which can add delay)

simple no connection state at sender receiver

small segment header no congestion control

UDP can blast away as fast as desired

3-15

UDP more

often used for streaming multimedia apps loss tolerant rate sensitive

other UDP uses DNS SNMP

reliable transfer over UDP add reliability at application layer application-specific

error recovery (eg FTP based on UDP but with recovery)

source port dest port

32 bits

Applicationdata

(message)

UDP segment format

length checksumLength in

bytes of UDPsegmentincluding

headerWhen the network is stressed you PRAY

3-16

UDP checksum

Sender treat segment

contents as sequence of 16-bit integers

checksum addition (1rsquos complement sum) of segment contents

sender puts checksum value into UDP checksum field

Receiver compute checksum of

received segment check if computed

checksum equals checksum field value NO - error detected YES - no error detected

But maybe errors nonetheless More later hellip

Goal detect ldquoerrorsrdquo (eg flipped bits) in transmitted segment

eg 1+2+3 = 6 So is 0+3+3=6

3-17

Internet Checksum Example Note

When adding numbers a carryout from the most significant bit needs to be added to the result

Example add two 16-bit integers

1 1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 01 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 1 0 01 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1

wraparound

sumchecksum

3-18

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-19

Principles of Reliable data transfer important in app transport link layers top-10 list of important networking topics

characteristics of unreliable channel will determine complexity of reliable data transfer protocol (rdt)

abstractionThis picture sets the scenario

3-20

Reliable data transfer getting started

sendside

receiveside

rdt_send() called from above (eg by app)

Passed data to deliver to receiver upper

layer

udt_send() called by rdtto transfer packet over

unreliable channel to receiver

rdt_rcv() called when packet arrives on rcv-side of

channel

deliver_data() called by rdt to deliver data

to upper

Let us now look at the gut of these modules Any question

3-21

Reliable data transfer getting startedWersquoll incrementally develop sender receiver

sides of reliable data transfer protocol (rdt)

consider only unidirectional data transfer but control info will flow on both directions

use finite state machines (FSM) to specify sender receiver

state1

state2

event causing state transitionactions taken on state transition

state when in this ldquostaterdquo next

state uniquely determined by

next eventeventactions

(DONrsquoT FALL ASLEEP)

Event timer receives message hellipetcAction executes a program send message hellipetc

3-22

Rdt10 reliable transfer over a reliable channel

underlying channel perfectly reliable no bit errors no loss of packets

separate FSMs for sender receiver sender sends data into underlying channel receiver reads data from underlying channel

Wait for call from

above packet = make_pkt(data)udt_send(packet)

rdt_send(data)

extract (packetdata)deliver_data(data)

Wait for call from

below

rdt_rcv(packet)

sender receiver

In reality this is an unrealistic assumption but

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in packet recall UDP checksum to detect bit errors

the question how to recover from errors acknowledgements (ACKs) receiver explicitly tells

sender that pkt received OK negative acknowledgements (NAKs) receiver

explicitly tells sender that pkt had errors sender retransmits pkt on receipt of NAK human scenarios using ACKs NAKs

new mechanisms in rdt20 (beyond rdt10) error detection receiver feedback control msgs (ACKNAK) rcvr-

gtsender

Ack I love u I love u 2Nak I love u I donrsquot love u

3-24

rdt20 FSM specification

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

belowsender

receiverrdt_send(data)

Buffer is needed to store data from application layer or to block call

3-25

rdt20 operation with no errors

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

3-26

rdt20 error scenario

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

GOT IT

3-27

rdt20 has a fatal flaw

What happens if ACKNAK corrupted

sender doesnrsquot know what happened at receiver

canrsquot just retransmit possible duplicate

What to do sender ACKsNAKs

receiverrsquos ACKNAK What if sender ACKNAK lost

retransmit but this might cause retransmission of correctly received pkt

Handling duplicates sender adds sequence

number to each pkt sender retransmits

current pkt if ACKNAK garbled

receiver discards (doesnrsquot deliver up) duplicate pkt

Sender sends one packet then waits for receiver response

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP
  segment structure
  reliable data transfer
  flow control
  connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-52

TCP Overview: RFCs 793, 1122, 1323, 2018, 2581

full duplex data:
  bi-directional data flow in the same connection
  MSS: maximum segment size
connection-oriented:
  handshaking (exchange of control msgs) init's sender, receiver state (e.g. buffer size) before data exchange
flow controlled:
  sender will not overwhelm the receiver
point-to-point:
  one sender, one receiver (not multicast)
reliable, in-order byte stream:
  no "message boundaries"; in the app layer we need delimiters
pipelined:
  TCP congestion and flow control set the window size
send & receive buffers

[Figure: application writes data -> socket door -> TCP send buffer -> segments -> TCP receive buffer -> socket door -> application reads data]

(The 800-lb gorilla in the transport stack. PAY ATTENTION!)

3-53

TCP segment structure

TCP header fields (32 bits wide):
  source port # | dest port #
  sequence number                     (counting by "bytes" of data, not segments)
  acknowledgement number
  head len | not used | URG ACK PSH RST SYN FIN | receive window   (# bytes rcvr willing to accept)
  checksum (Internet checksum, as in UDP) | urg data pointer
  options (variable length)
  application data (variable length)

Flag bits: URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection estab (setup, teardown commands)

Due to the options field we have a variable-length header.
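As a small, hedged illustration of the fixed 20-byte part of this header, the Python sketch below unpacks the fields listed above from raw segment bytes; it ignores options and is not a full TCP parser.

import struct

def parse_tcp_header(segment: bytes):
    """Unpack the fixed 20-byte TCP header described above (options ignored)."""
    (src_port, dst_port, seq, ack,
     offset_flags, rcv_window, checksum, urg_ptr) = struct.unpack('!HHIIHHHH', segment[:20])
    header_len = (offset_flags >> 12) * 4      # head len field, in 32-bit words
    flags = offset_flags & 0x3F                # low 6 bits: URG ACK PSH RST SYN FIN
    return {
        'src_port': src_port, 'dst_port': dst_port,
        'seq': seq, 'ack': ack,
        'header_len': header_len,
        'URG': bool(flags & 0x20), 'ACK': bool(flags & 0x10),
        'PSH': bool(flags & 0x08), 'RST': bool(flags & 0x04),
        'SYN': bool(flags & 0x02), 'FIN': bool(flags & 0x01),
        'rcv_window': rcv_window, 'checksum': checksum, 'urg_ptr': urg_ptr,
    }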

3-54

TCP seq #'s and ACKs

Seq #'s:
  byte stream "number" of the first byte in the segment's data
ACKs:
  seq # of the next byte expected from the other side
  cumulative ACK
Q: how does the receiver handle out-of-order segments?
A: the TCP spec doesn't say; it's up to the implementor

Simple telnet scenario (Host A <-> Host B):
  User types 'C'; A sends: Seq=42, ACK=79, data = 'C'
  Host B ACKs receipt of 'C' and echoes back 'C': Seq=79, ACK=43, data = 'C'
  Host A ACKs receipt of the echoed 'C': Seq=43, ACK=80

Initial seq #'s are negotiated during the 3-way handshake.

3-55

TCP Round Trip Time and Timeout

Q: how to set the TCP timeout value?
  longer than RTT, but RTT varies
  too short: premature timeout -> unnecessary retransmissions
  too long: slow reaction to segment loss -> poor performance

Q: how to estimate RTT?
  SampleRTT: measured time from segment transmission until ACK receipt
    ignore retransmissions
  SampleRTT will vary; want the estimated RTT "smoother"
    average several recent measurements, not just the current SampleRTT

[Figure: measuring SampleRTT across a retransmission is ambiguous: pairing the ACK with the original transmission makes the estimate too long, pairing it with the retransmission makes it too short.]

3-56

TCP Round Trip Time and Timeout

EstimatedRTT = (1 - α)·EstimatedRTT + α·SampleRTT

Exponential weighted moving average:
  influence of a past sample decreases exponentially fast
  typical value: α = 0.125

ERTT(0) = 0
ERTT(1) = (1-α)·ERTT(0) + α·SRTT(1) = α·SRTT(1)
ERTT(2) = (1-α)·α·SRTT(1) + α·SRTT(2)
ERTT(3) = (1-α)·(1-α)·α·SRTT(1) + (1-α)·α·SRTT(2) + α·SRTT(3)

3-57

Example RTT estimation

[Figure: SampleRTT and EstimatedRTT (milliseconds, roughly 100-350 ms) versus time (seconds, 1-106), for the RTT from gaia.cs.umass.edu to fantasia.eurecom.fr]

3-58

TCP Round Trip Time and Timeout

Setting the timeout (Jacobson/Karels): EstimatedRTT plus a "safety margin"
  large variation in EstimatedRTT -> larger safety margin

First estimate how much SampleRTT deviates from EstimatedRTT:

  DevRTT = (1-β)·DevRTT + β·|SampleRTT - EstimatedRTT|
  (typically β = 0.25)

Then set the timeout interval:

  TimeoutInterval = EstimatedRTT + 4·DevRTT
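A minimal Python sketch of this estimator, using the α = 0.125 and β = 0.25 values quoted above; the initial values for EstimatedRTT and DevRTT are assumptions, not something the slides specify.

class RTTEstimator:
    def __init__(self, first_sample, alpha=0.125, beta=0.25):
        self.alpha = alpha
        self.beta = beta
        self.estimated_rtt = first_sample     # assumed initialization: first measurement
        self.dev_rtt = first_sample / 2       # assumed initialization

    def update(self, sample_rtt):
        """Fold one SampleRTT (seconds) into EstimatedRTT and DevRTT, return TimeoutInterval."""
        # update DevRTT against the current estimate, then smooth the estimate itself
        self.dev_rtt = (1 - self.beta) * self.dev_rtt + self.beta * abs(sample_rtt - self.estimated_rtt)
        self.estimated_rtt = (1 - self.alpha) * self.estimated_rtt + self.alpha * sample_rtt
        return self.timeout_interval()

    def timeout_interval(self):
        return self.estimated_rtt + 4 * self.dev_rtt

# Example: est = RTTEstimator(first_sample=0.2); est.update(0.25) gives the new TimeoutInterval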

3-59

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP
  segment structure
  reliable data transfer
  flow control
  connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-60

TCP reliable data transfer

TCP creates an rdt service on top of IP's unreliable service
Pipelined segments (for performance)
Cumulative ACKs
TCP uses a single retransmission timer

Retransmissions are triggered by:
  timeout events
  duplicate ACKs (for performance reasons)

Initially consider a simplified TCP sender:
  ignore duplicate ACKs
  ignore flow control, congestion control

3-61

TCP sender events

data rcvd from app:
  create a segment with seq #
  seq # is the byte-stream number of the first data byte in the segment
  start timer if not already running (think of the timer as for the oldest unACKed segment)
  expiration interval: TimeOutInterval

timeout:
  retransmit the segment that caused the timeout
  restart the timer

ACK rcvd:
  if it acknowledges previously unACKed segments:
    update what is known to be ACKed
    start the timer if there are outstanding segments

3-62

TCP sender (simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event)

  event: data received from application above
    create TCP segment with sequence number NextSeqNum
    if (timer currently not running)
      start timer
    pass segment to IP
    NextSeqNum = NextSeqNum + length(data)

  event: timer timeout
    retransmit not-yet-acknowledged segment with smallest sequence number
    start timer

  event: ACK received, with ACK field value of y
    if (y > SendBase) {
      SendBase = y
      if (there are currently not-yet-acknowledged segments)
        start timer
    }
} /* end of loop forever */

Comment:
  SendBase-1 = last cumulatively ACKed byte
Example:
  SendBase-1 = 71; y = 73, so the rcvr wants 73+; y > SendBase, so new data is ACKed

3-63

TCP retransmission scenarios

[Two timeline figures, Host A -> Host B]

lost ACK scenario:
  A sends Seq=92, 8 bytes of data; B's ACK=100 is lost (X)
  A times out and resends Seq=92, 8 bytes of data; B replies ACK=100 again
  (SendBase stays at 100 until the retransmission is ACKed)

premature timeout scenario:
  A sends Seq=92, 8 bytes of data, then Seq=100, 20 bytes of data; B replies ACK=100, ACK=120
  A's timer for Seq=92 expires before ACK=100 arrives, so A resends Seq=92, 8 bytes of data
  B replies with the cumulative ACK=120
  (Sendbase = 100, then SendBase = 120)

3-64

TCP retransmission scenarios (more)

Cumulative ACK scenario (Host A -> Host B):
  A sends Seq=92, 8 bytes of data, then Seq=100, 20 bytes of data
  B's ACK=100 is lost (X), but the cumulative ACK=120 arrives before A's timeout
  SendBase = 120; nothing needs to be retransmitted

Room for improvement?

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at receiver -> TCP receiver action

1. Arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed
   -> Delayed ACK: wait up to 500 ms for the next segment; if no next segment, send ACK
2. Arrival of in-order segment with expected seq #; one other segment has an ACK pending
   -> Immediately send a single cumulative ACK, ACKing both in-order segments
3. Arrival of out-of-order segment with higher-than-expected seq # (gap detected)
   -> Immediately send a duplicate ACK, indicating the seq # of the next expected byte
4. Arrival of a segment that partially or completely fills the gap
   -> Immediately send ACK, provided that the segment starts at the lower end of the gap

ACK the "largest in-order byte" seq #.
REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long:
  long delay before resending a lost packet

Detect lost segments via duplicate ACKs:
  sender often sends many segments back-to-back
  if a segment is lost, there will likely be many duplicate ACKs

If the sender receives 3 duplicate ACKs for the same data, it assumes that the segment after the ACKed data was lost:
  fast retransmit: resend the segment before the timer expires

3-67

Fast retransmit algorithm:

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments)
      start timer
  }
  else {                                        /* a duplicate ACK for an already-ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y     /* fast retransmit */
  }

Q: why resend the pkt with seq # y?
A: that is what the receiver expects; y is the next byte the receiver is waiting for.

3-68

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP
  segment structure
  reliable data transfer
  flow control
  connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-69

TCP Flow Control

receive side of a TCP connection has a receive buffer

flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast

speed-matching service: matching the send rate to the receiving app's drain rate
  the app process may be slow at reading from the buffer

3-70

TCP Flow control how it works

(Suppose the TCP receiver discards out-of-order segments.)

spare room in buffer = RcvWindow
                     = RcvBuffer - [LastByteRcvd - LastByteRead]

Rcvr advertises the spare room by including the value of RcvWindow in segments

Sender limits unACKed data to RcvWindow
  guarantees the receive buffer doesn't overflow

This goes to show that the design of the header fields is important.
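A small Python sketch of the bookkeeping above; the buffer size and byte counters in the example are illustrative assumptions.

def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    """Spare room the receiver can advertise, per the formula above."""
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

def sender_can_send(last_byte_sent, last_byte_acked, nbytes, advertised_window):
    """Sender-side check: keep unACKed data within the advertised RcvWindow."""
    unacked = last_byte_sent - last_byte_acked
    return unacked + nbytes <= advertised_window

# Example (assumed numbers): 64 KB buffer, 48 KB received, 16 KB read by the app
# rcv_window(65536, 49152, 16384) == 32768, i.e. a 32 KB advertised window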

3-71

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP
  segment structure
  reliable data transfer
  flow control
  connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender, receiver establish a "connection" before exchanging data segments
  initialize TCP variables: seq #'s, buffers, flow control info (e.g. RcvWindow)
  client: connection initiator
    Socket clientSocket = new Socket("hostname", "port number");
  server: contacted by client
    Socket connectionSocket = welcomeSocket.accept();

Three-way handshake:
  Step 1: client host sends a TCP SYN segment to the server
    specifies initial seq #
    no data
  Step 2: server host receives SYN, replies with a SYN-ACK segment
    server allocates buffers
    specifies server initial seq #
  Step 3: client receives SYN-ACK, replies with an ACK segment, which may contain data

IMPORTANT: SYN FLOODING

3-73

TCP three-way handshake

client -> server: SYN=1, seq=client_isn                       (connection request)
server -> client: SYN=1, seq=server_isn, Ack=client_isn+1     (connection granted)
client -> server: SYN=0, seq=client_isn+1, Ack=server_isn+1   (ACK; may carry data)

3-74

TCP Connection Management (cont)

Closing a connection:

client closes socket: clientSocket.close();

Step 1: client end system sends a TCP FIN control segment to the server
Step 2: server receives FIN, replies with ACK; closes the connection, sends FIN

[Figure: client sends FIN (close); server replies ACK, then sends its own FIN (close); client replies ACK, enters timed wait, then closed]

Q: why don't we combine the ACK and the FIN?
The sender may still have some data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3: client receives FIN, replies with ACK
  enters "timed wait": will respond with ACK to received FINs
Step 4: server receives ACK; connection closed

Note: with a small modification, this can handle simultaneous FINs.

[Figure: client sends FIN (closing); server ACKs, then sends FIN (closing); client ACKs, enters timed wait, then both sides are closed]

3-76

TCP Connection Management (cont)

[Figures: TCP client lifecycle and TCP server lifecycle state diagrams]

3-77

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP
  segment structure
  reliable data transfer
  flow control
  connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-78

Principles of Congestion Control

Congestion:
  informally: "too many sources sending too much data too fast for the network to handle"
  different from flow control!
  manifestations:
    lost packets (buffer overflow at routers)
    long delays (queueing in router buffers)
  another top-10 problem!

TCP provides one of the MANY WAYS to perform congestion control.

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous)
one router, infinite buffers, capacity C
no retransmission

large delays when congested
maximum achievable throughput

[Figure: Host A sends λ_in (original data, from the application); unlimited shared output link buffers at the router; Host B receives λ_out (to the application)]

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers
sender retransmission of lost packets

[Figure: Host A sends λ_in (original data) and λ'_in (original data plus retransmitted data, due to the transport layer); finite shared output link buffers; pkts get dropped at the router; Host B receives λ_out]

3-81

Causes/costs of congestion: scenario 2

always: λ_in = λ_out (goodput)
"perfect" retransmission only when loss: λ'_in > λ_out
retransmission of a delayed (not lost) packet makes λ'_in larger (than the perfect case) for the same λ_out

"costs" of congestion:
  more work (retransmissions) for a given "goodput"
  unneeded retransmissions: the link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3

four senders
multihop paths
timeout/retransmit

Q: what happens as λ_in and λ'_in increase?

[Figure: Host A sends λ_in (original data) and λ'_in (original data plus retransmitted data); finite shared output link buffers; Host B receives λ_out]

3-83

Causes/costs of congestion: scenario 3

Another "cost" of congestion:
  when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted

[Figure: λ_out drops off sharply as the offered load keeps increasing]

The system collapses (e.g. students!)

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
  no explicit feedback from the network
  congestion inferred from end-system observed loss, delay
  approach taken by TCP

Network-assisted congestion control:
  routers provide feedback to end systems
    single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
    explicit rate the sender should send at

3-85

Case study: ATM ABR congestion control

ABR: available bit rate:
  "elastic service"
  if sender's path "underloaded": sender should use the available bandwidth
  if sender's path congested: sender throttled to the minimum guaranteed rate

RM (resource management) cells:
  sent by sender, interspersed with data cells
  bits in RM cell set by switches ("network-assisted")
    NI bit: no increase in rate (mild congestion)
    CI bit: congestion indication
  RM cells returned to the sender by the receiver, with bits intact

3-86

Case study: ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell:
  a congested switch may lower the ER value in the cell
  the sender's send rate is thus the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch
  if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP
  segment structure
  reliable data transfer
  flow control
  connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)
sender limits transmission:
  LastByteSent - LastByteAcked <= CongWin
Roughly,
  rate ≈ CongWin / RTT  bytes/sec
CongWin is a dynamic function of perceived network congestion

How does the sender perceive congestion?
  loss event = timeout or 3 duplicate ACKs
  TCP sender reduces its rate (CongWin) after a loss event

three mechanisms:
  AIMD
  slow start
  conservative after timeout events

Note: congestion control must be efficient to make use of the available BW.

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events (probing)

[Figure: long-lived TCP connection; congestion window (8, 16, 24 Kbytes) traces a sawtooth over time]

Role of ACKs:
  1. an indication of loss
  2. ACKs are self-clocking: if the delay is large, the rate of congestion-window growth will be reduced

Human analogy: HK government or CUHK.

3-90

TCP Slow Start

When a connection begins, CongWin = 1 MSS
  Example: MSS = 500 bytes & RTT = 200 msec
  initial rate = 20 kbps

available bandwidth may be >> MSS/RTT
  desirable to quickly ramp up to a respectable rate

When a connection begins, increase the rate exponentially fast until the first loss event.

3-91

TCP Slow Start (more)

When a connection begins, increase the rate exponentially until the first loss event:
  double CongWin every RTT
  done by incrementing CongWin for every ACK received

Summary: the initial rate is slow, but it ramps up exponentially fast.

[Figure: Host A sends one segment, then two segments, then four segments, one RTT apart]

3-92

Refinement

After 3 dup ACKs:
  CongWin is cut in half
  the window then grows linearly
  this is known as the "fast recovery" phase, implemented in (the newer) TCP Reno

But after a timeout event:
  CongWin is instead set to 1 MSS
  the window then grows exponentially up to a threshold, then grows linearly

Philosophy:
  3 dup ACKs indicate the network is capable of delivering some segments
  a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.

Implementation:
  variable Threshold
  at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

[Figure: congestion window size (segments, 0-14) versus transmission round (1-15), showing the threshold and the different behavior of TCP Tahoe and TCP Reno after a loss]

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.
When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.
When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.
When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss.
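The rules above can be collected into a short Python sketch of the sender's CongWin bookkeeping (slow start, congestion avoidance, triple duplicate ACK, timeout). It is an illustrative model of this summary, with MSS and the initial Threshold as assumed parameters, not a real TCP implementation.

class CongestionWindow:
    def __init__(self, mss=1460, init_threshold=64_000):
        self.mss = mss
        self.congwin = mss              # start at 1 MSS (slow start)
        self.threshold = init_threshold # assumed initial Threshold

    def in_slow_start(self):
        return self.congwin < self.threshold

    def on_new_ack(self):
        if self.in_slow_start():
            self.congwin += self.mss                            # doubles every RTT
        else:
            self.congwin += self.mss * self.mss / self.congwin  # +1 MSS per RTT

    def on_triple_dup_ack(self):
        self.threshold = self.congwin / 2
        self.congwin = self.threshold                           # fast recovery (Reno)

    def on_timeout(self):
        self.threshold = self.congwin / 2
        self.congwin = self.mss                                 # back to slow start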

3-95

TCP sender congestion control

Event: ACK receipt for previously unACKed data
  State: Slow Start (SS)
  Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
  Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unACKed data
  State: Congestion Avoidance (CA)
  Action: CongWin = CongWin + MSS·(MSS/CongWin)
  Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
  State: SS or CA
  Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
  Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
  State: SS or CA
  Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
  Commentary: enter slow start

Event: duplicate ACK
  State: SS or CA
  Action: increment the duplicate ACK count for the segment being ACKed
  Commentary: CongWin and Threshold not changed

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when a loss occurs.
When the window is W, the throughput is W/RTT.
Just after a loss, the window drops to W/2, the throughput to W/(2·RTT).
Average throughput: 0.75·W/RTT
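A back-of-the-envelope way to see the 0.75 factor, assuming the window grows linearly from W/2 back to W between losses: over one loss cycle the window averages (W/2 + W)/2 = 3W/4, so the average rate is roughly (3/4)·W/RTT = 0.75·W/RTT.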

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires a window size of W = 83,333 in-flight segments

Throughput in terms of loss rate:

  throughput = 1.22·MSS / (RTT·√L)

To get 10 Gbps, L = 2·10^-10. Wow!
New versions of TCP for high-speed networks are needed!
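A quick Python check of both numbers above, treating the 1.22·MSS/(RTT·√L) formula as given:

mss_bits = 1500 * 8          # 1500-byte segments
rtt = 0.100                  # 100 ms
target = 10e9                # 10 Gbps

# in-flight window needed to fill the pipe: throughput = W * MSS / RTT
W = target * rtt / mss_bits
print(round(W))              # ~83333 segments

# loss rate needed, from throughput = 1.22 * MSS / (RTT * sqrt(L))
L = (1.22 * mss_bits / (rtt * target)) ** 2
print(L)                     # ~2e-10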

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.

[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]

3-99

Why is TCP fair?

Two competing, homogeneous sessions (e.g. similar propagation delay, etc.):
  additive increase gives a slope of 1 as throughput increases
  multiplicative decrease decreases throughput proportionally

[Figure: Connection 2 throughput versus Connection 1 throughput, both axes from 0 to R, with the "equal bandwidth share" line; the trajectory alternates between "congestion avoidance: additive increase" and "loss: decrease window by factor of 2"]

3-100

Fairness (more)

Fairness and UDP:
  multimedia apps often do not use TCP
    they do not want their rate throttled by congestion control
  instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
  research area: TCP-friendly congestion control

Fairness and parallel TCP connections:
  nothing prevents an app from opening parallel connections between 2 hosts
  Web browsers do this
  Example: a link of rate R supporting 9 connections
    a new app asks for 1 TCP connection, gets rate R/10
    a new app asks for 11 TCP connections, gets around R/2

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by:
  TCP connection establishment
  data transmission delay
  slow start (sender's congestion window)

Notation, assumptions:
  assume one link between client and server, of rate R
  S: MSS (bits)
  O: object size (bits)
  no retransmissions (no loss, no corruption)
  protocol overhead is negligible

Window size:
  first assume a fixed congestion window of W segments
  then dynamic window modeling: slow start

3-102

Fixed congestion window (1)
(assume no congestion-window constraint)

Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R
  the ACK for the first segment in the window returns before a window's worth of data is sent

  delay = 2RTT + O/R

The object request is piggybacked.
This is the "lower bound" of the latency.

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R
  wait for the ACK after sending a window's worth of data

  delay = 2RTT + O/R + (K-1)·[S/R + RTT - WS/R]

where K is the number of windows of data that cover the object.
If O is the "size" of the object, we have K = O/(WS) (rounded up to an integer).
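A small Python sketch of this static-window latency model, transcribing the two cases above (O and S in bits, R in bits/sec, RTT in seconds):

from math import ceil

def static_window_latency(O, R, S, W, RTT):
    """Latency to fetch an O-bit object over a rate-R link with a fixed window of W segments of S bits."""
    K = ceil(O / (W * S))                     # number of windows covering the object
    stall = S / R + RTT - W * S / R           # idle time per window, if positive
    if stall <= 0:                            # first case: WS/R >= RTT + S/R, no stalls
        return 2 * RTT + O / R
    return 2 * RTT + O / R + (K - 1) * stall  # second case: stall before each later window

# Example (assumed numbers): O = 100 kbits, R = 1 Mbps, S = 4 kbits, W = 4, RTT = 100 ms
# print(static_window_latency(100e3, 1e6, 4e3, 4, 0.1))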

3-104

TCP Delay Modeling Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the delay for one object is:

  Latency = 2RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where P is the number of times TCP idles at the server:

  P = min{Q, K-1}

  where Q is the number of times the server would idle if the object were of infinite size,
  and K is the number of windows that cover the object.
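As a cross-check on this formula, here is a hedged Python sketch that computes K, Q, P and the latency exactly as they are defined on this and the following slides; Q is found directly from the idle-time condition rather than in closed form.

from math import ceil, log2

def slow_start_latency(O, R, S, RTT):
    """Latency for an O-bit object with slow start, per the formula above (bits, bits/s, seconds)."""
    K = ceil(log2(O / S + 1))                 # number of windows that cover the object
    Q = 0                                     # times the server would idle for an infinite object
    k = 1
    while S / R + RTT - (2 ** (k - 1)) * S / R > 0:
        Q += 1                                # idle time after the k-th window is positive
        k += 1
    P = min(Q, K - 1)                         # times TCP actually idles at the server
    return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R

# Example from the slides: O/S = 15 segments gives K = 4 windows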

3-105

TCP Delay Modeling Slow Start (2)

[Figure: timeline at client and server: initiate TCP connection, request object; first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R; complete transmission, object delivered]

Example:
  O/S = 15 segments
  K = 4 windows
  Q = 2
  P = min{K-1, Q} = 2
  server idles P = 2 times

Delay components:
  2RTT for connection establishment and request
  O/R to transmit the object
  time the server idles due to slow start

Server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

For the kth window (k = 1, 2, …, K):
  2^(k-1)·S/R = time to transmit the kth window
  S/R + RTT = time from when the server starts to send a segment until the server receives its acknowledgement
  idle time after the kth window = [S/R + RTT - 2^(k-1)·S/R]^+

delay = 2RTT + O/R + sum over k = 1..P of idleTime_k
      = 2RTT + O/R + sum over k = 1..P of [S/R + RTT - 2^(k-1)·S/R]
      = 2RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

[Figure: same client/server slow-start timeline as on the previous slide, with windows S/R, 2S/R, 4S/R, 8S/R]

3-107

TCP Delay Modeling (4)

How do we calculate K?
Recall K = the number of windows that cover the object.

K = min{k : 2^0·S + 2^1·S + … + 2^(k-1)·S >= O}
  = min{k : 2^0 + 2^1 + … + 2^(k-1) >= O/S}
  = min{k : 2^k - 1 >= O/S}
  = min{k : k >= log2(O/S + 1)}
  = ceil( log2(O/S + 1) )

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).

3-108

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

  R         O/R        P   Min latency: O/R + 2RTT   Latency with slow start
  28 kbps   28.6 sec   1   28.8 sec                  28.9 sec
  100 kbps  8 sec      2   8.2 sec                   8.4 sec
  1 Mbps    800 msec   5   1 sec                     1.5 sec
  10 Mbps   80 msec    7   0.28 sec                  0.98 sec

Observation: slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

  R         O/R        P   Min latency: O/R + 2RTT   Latency with slow start
  28 kbps   1.43 sec   1   1.63 sec                  1.73 sec
  100 kbps  0.4 sec    2   0.6 sec                   0.76 sec
  1 Mbps    40 msec    3   0.24 sec                  0.52 sec
  10 Mbps   4 msec     3   0.20 sec                  0.50 sec

Observation: slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

  R         O/R        P   Min latency: O/R + 2RTT   Latency with slow start
  28 kbps   1.43 sec   3   3.4 sec                   5.8 sec
  100 kbps  0.4 sec    3   2.4 sec                   5.2 sec
  1 Mbps    40 msec    3   2.0 sec                   5.0 sec
  10 Mbps   4 msec     3   2.0 sec                   5.0 sec

Observation: slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
  1 base HTML page (of size O bits)
  M images (each of size O bits)

Non-persistent HTTP:
  M+1 TCP connections in series
  Response time = (M+1)·O/R + (M+1)·2RTT + sum of idle times

Persistent HTTP:
  2 RTT to request and receive the base HTML file
  1 RTT to request and receive the M images
  Response time = (M+1)·O/R + 3RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
  suppose M/X is an integer
  1 TCP connection for the base file
  M/X sets of parallel connections for the images
  Response time = (M+1)·O/R + (M/X + 1)·2RTT + sum of idle times
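A hedged Python sketch of these three expressions, ignoring the "sum of idle times" terms (so it gives lower bounds rather than the exact values plotted on the next slides); the parameter values are the ones quoted there:

def non_persistent(O, R, RTT, M):
    return (M + 1) * O / R + (M + 1) * 2 * RTT

def persistent(O, R, RTT, M):
    return (M + 1) * O / R + 3 * RTT

def parallel_non_persistent(O, R, RTT, M, X):
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT

# Slide parameters: O = 5 Kbytes, M = 10, X = 5, RTT = 100 msec
O, M, X, RTT = 5 * 8 * 1000, 10, 5, 0.1
for R in (28e3, 100e3, 1e6, 10e6):
    print(R, non_persistent(O, R, RTT, M), persistent(O, R, RTT, M),
          parallel_non_persistent(O, R, RTT, M, X))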

3-112

[Figure: HTTP response time (in seconds, 0-20) for R = 28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps; bars for non-persistent, persistent, and parallel non-persistent; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5]

For low bandwidth, connection & response time are dominated by transmission time.
Persistent connections only give a minor improvement over parallel connections.

3-113

[Figure: HTTP response time (in seconds, 0-70) for R = 28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps; bars for non-persistent, persistent, and parallel non-persistent; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5]

For larger RTT, response time is dominated by TCP establishment & slow-start delays. Persistent connections now give an important improvement, particularly in high delay·bandwidth networks.

3-114

Chapter 3: Summary

principles behind transport layer services:
  multiplexing, demultiplexing
  reliable data transfer
  flow control
  congestion control
instantiation and implementation in the Internet:
  UDP
  TCP

Next:
  leaving the network "edge" (application, transport layers)
  into the network "core"

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including the value of RcvWindow in segments

Sender limits unACKed data to RcvWindow: guarantees the receive buffer doesn't overflow

This goes to show that the design process of the header is important.
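
A small Java sketch of this bookkeeping (field names follow the slide's variables; the buffer size is an assumption):

// Illustrative flow-control accounting on both sides of a connection.
public class FlowControl {
    // Receiver side
    long rcvBuffer = 64 * 1024;      // assumed receive buffer size, in bytes
    long lastByteRcvd, lastByteRead;

    long rcvWindow() {               // spare room advertised back to the sender
        return rcvBuffer - (lastByteRcvd - lastByteRead);
    }

    // Sender side: keep unACKed bytes within the last advertised window
    long lastByteSent, lastByteAcked;

    boolean maySend(int segmentLen, long advertisedRcvWindow) {
        long unacked = lastByteSent - lastByteAcked;
        return unacked + segmentLen <= advertisedRcvWindow;
    }
}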

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender, receiver establish a "connection" before exchanging data segments

initialize TCP variables: seq. #s, buffers, flow control info (e.g. RcvWindow)

client: connection initiator
   Socket clientSocket = new Socket("hostname", portNumber);
server: contacted by client
   Socket connectionSocket = welcomeSocket.accept();

Three way handshake:

Step 1: client host sends TCP SYN segment to server; specifies initial seq #; no data

Step 2: server host receives SYN, replies with SYN-ACK segment; server allocates buffers; specifies server initial seq #

Step 3: client receives SYN-ACK, replies with ACK segment, which may contain data

IMPORTANT: SYN FLOODING
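
A self-contained Java sketch of the two calls named above; the host name and port are placeholders, and no data is exchanged (the three-way handshake happens inside connect/accept).

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal client/server socket setup; connecting triggers TCP's three-way handshake under the hood.
public class HandshakeDemo {
    // Client side: connection initiator.
    static Socket connectClient() throws IOException {
        return new Socket("server.example.com", 6789);   // hypothetical server name and port
    }

    // Server side: accept() returns once a client's handshake has completed.
    static Socket acceptOne() throws IOException {
        try (ServerSocket welcomeSocket = new ServerSocket(6789)) {
            return welcomeSocket.accept();
        }
    }
}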

3-73

TCP three-way handshake

[Timeline: client ↔ server]

Connection request:  SYN=1, seq=client_isn
Connection granted:  SYN=1, seq=server_isn, ack=client_isn+1
ACK:                 SYN=0, seq=client_isn+1, ack=server_isn+1

3-74

TCP Connection Management (cont)

Closing a connection:

client closes socket: clientSocket.close();

Step 1: client end system sends TCP FIN control segment to server

Step 2: server receives FIN, replies with ACK; closes connection, sends FIN

[Timeline: client ↔ server. Client: close, sends FIN. Server: ACK, close, sends FIN. Client: ACK, then "timed wait" before closed.]

Q: why don't we combine ACK and FIN?
The sender may still have some data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3: client receives FIN, replies with ACK; enters "timed wait", during which it will respond with ACK to received FINs

Step 4: server receives ACK; connection closed

Note: with a small modification, can handle simultaneous FINs

[Timeline: client ↔ server, FIN/ACK in each direction; client: closing → timed wait → closed; server: closing → closed]

3-76

TCP Connection Management (cont)

TCP client lifecycle

TCP server lifecycle

[state diagrams omitted]

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion:
• informally: "too many sources sending too much data too fast for the network to handle"
• different from flow control!
• manifestations:
  - lost packets (buffer overflow at routers)
  - long delays (queueing in router buffers)
• another top-10 problem!

TCP provides one of the MANY WAYS to perform CC

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous)
one router, infinite buffers, output link capacity C
no retransmission

large delays when congested
maximum achievable throughput

[Diagram: Host A and Host B each send λin (original data, from application) into one router with unlimited shared output-link buffers; λout is delivered to the application at the receivers]

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers
sender retransmission of lost packet

[Diagram: Host A offers λin (original data) and λ'in (original data plus retransmitted data) to a router with finite shared output-link buffers; λout is delivered to Host B]

Retransmissions are due to the transport layer: a pkt got dropped.

3-81

Causes/costs of congestion: scenario 2 (cont.)

always: λin = λout (goodput)
"perfect" retransmission, only when loss: λ'in > λout
retransmission of a delayed (not lost) packet makes λ'in larger (than the perfect case) for the same λout

"costs" of congestion:
• more work (retransmissions) for a given "goodput"
• unneeded retransmissions: link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3

four senders, multihop paths, timeout/retransmit

Q: what happens as λin and λ'in increase?

[Diagram: Host A offers λin (original data) and λ'in (original data plus retransmitted data); routers have finite shared output-link buffers; λout is delivered to Host B]

3-83

Causes/costs of congestion: scenario 3 (cont.)

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

[Plot: λout collapses as offered load grows]
The system collapses (e.g. students!)

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
• no explicit feedback from network
• congestion inferred from end-system observed loss, delay
• approach taken by TCP

Network-assisted congestion control:
• routers provide feedback to end systems
• single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
• explicit rate sender should send at

3-85

Case study ATM ABR congestion control

ABR: available bit rate:
• "elastic service"
• if sender's path "underloaded": sender should use available bandwidth
• if sender's path congested: sender throttled to minimum guaranteed rate

RM (resource management) cells:
• sent by sender, interspersed with data cells
• bits in RM cell set by switches ("network-assisted"):
  - NI bit: no increase in rate (mild congestion)
  - CI bit: congestion indication
• RM cells returned to sender by receiver, with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell:
• congested switch may lower ER value in cell
• sender's send rate is thus the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch
• if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission: LastByteSent - LastByteAcked ≤ CongWin

Roughly:  rate ≈ CongWin / RTT  bytes/sec

CongWin is a dynamic function of perceived network congestion

How does sender perceive congestion?
• loss event = timeout or 3 duplicate ACKs
• TCP sender reduces rate (CongWin) after a loss event

three mechanisms:
• AIMD
• slow start
• conservative after timeout events

Note: CC must be efficient to make use of the available BW.

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

[Plot: congestion window of a long-lived TCP connection over time, a sawtooth oscillating through roughly 8, 16 and 24 Kbytes]

Role of ACK:
1. an indication of loss
2. ACK is self-clocking: if delay is large, the rate at which the congestion window grows is reduced

Human analogy: HK government or CUHK
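
The AIMD rules fit in a few lines; the sketch below is illustrative Java (window in bytes; the MSS and starting window are assumptions), with the per-ACK increment chosen so that the window grows by about one MSS per RTT.

// Illustrative AIMD window update.
public class Aimd {
    static final double MSS = 1460;      // assumed maximum segment size, bytes
    double congWin = 24 * 1024;          // example starting window

    // Additive increase: roughly +1 MSS per RTT, applied a little on every ACK.
    void onAck() {
        congWin += MSS * MSS / congWin;
    }

    // Multiplicative decrease: halve the window after a loss event.
    void onLoss() {
        congWin = Math.max(MSS, congWin / 2);
    }

    double rateBytesPerSec(double rttSeconds) {
        return congWin / rttSeconds;     // rate ≈ CongWin / RTT
    }
}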

3-90

TCP Slow Start

When connection begins, CongWin = 1 MSS
• example: MSS = 500 bytes & RTT = 200 msec
• initial rate = 20 kbps

available bandwidth may be >> MSS/RTT
• desirable to quickly ramp up to a respectable rate

When connection begins, increase rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)

When connection begins, increase rate exponentially until the first loss event:
• double CongWin every RTT
• done by incrementing CongWin by 1 MSS for every ACK received

Summary: initial rate is slow but ramps up exponentially fast

[Timeline: Host A sends one segment, then two segments, then four segments, doubling each RTT as ACKs return from Host B]

3-92

Refinement

After 3 dup ACKs:
• CongWin is cut in half
• window then grows linearly
• this is known as the "fast-recovery" phase, implemented in the newer TCP-Reno

But after a timeout event:
• CongWin instead set to 1 MSS
• window then grows exponentially to a threshold, then grows linearly

Philosophy:
• 3 dup ACKs indicates network capable of delivering some segments
• timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout.

Implementation:
• variable Threshold
• at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

[Plot: congestion window size (segments) vs. transmission round (1-15), showing the threshold and the TCP Tahoe and TCP Reno trajectories]

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, sender is in slow-start phase; window grows exponentially.

When CongWin is above Threshold, sender is in congestion-avoidance phase; window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP-Vegas: instead of reacting to a loss, anticipate & prepare for a loss.

3-95

TCP sender congestion control

Event: ACK receipt for previously unacked data | State: Slow Start (SS)
   TCP Sender Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
   Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data | State: Congestion Avoidance (CA)
   TCP Sender Action: CongWin = CongWin + MSS * (MSS/CongWin)
   Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK | State: SS or CA
   TCP Sender Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
   Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout | State: SS or CA
   TCP Sender Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
   Commentary: enter slow start

Event: duplicate ACK | State: SS or CA
   TCP Sender Action: increment duplicate ACK count for segment being ACKed
   Commentary: CongWin and Threshold not changed
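
The table above maps almost directly onto code; here is a hedged Java sketch of that state machine (per-ACK granularity, window and threshold in bytes; initial values are assumptions).

// Illustrative TCP sender congestion-control state machine, following the table above.
public class CongestionControl {
    enum State { SLOW_START, CONGESTION_AVOIDANCE }

    static final double MSS = 1460;    // assumed segment size, bytes
    State state = State.SLOW_START;
    double congWin = MSS;
    double threshold = 64 * 1024;      // assumed initial threshold
    int dupAckCount = 0;

    void onNewAck() {                  // ACK receipt for previously unacked data
        dupAckCount = 0;
        if (state == State.SLOW_START) {
            congWin += MSS;            // doubles CongWin every RTT
            if (congWin > threshold) state = State.CONGESTION_AVOIDANCE;
        } else {
            congWin += MSS * MSS / congWin;   // ~+1 MSS per RTT
        }
    }

    void onDuplicateAck() {            // duplicate ACK
        dupAckCount++;
        if (dupAckCount == 3) onTripleDupAck();
    }

    void onTripleDupAck() {            // loss event detected by triple duplicate ACK
        threshold = congWin / 2;
        congWin = Math.max(MSS, threshold);
        state = State.CONGESTION_AVOIDANCE;
    }

    void onTimeout() {                 // loss event detected by timeout
        threshold = congWin / 2;
        congWin = MSS;
        dupAckCount = 0;
        state = State.SLOW_START;
    }
}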

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when loss occurs.
When the window is W, throughput is W/RTT.
Just after a loss, the window drops to W/2, throughput to W/(2·RTT).
Average throughput: 0.75 · W/RTT (the sawtooth averages out midway between W/(2·RTT) and W/RTT).

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput

Requires window size W = 83,333 in-flight segments

Throughput in terms of loss rate:  throughput ≈ 1.22 · MSS / (RTT · √L)

→ to get 10 Gbps, L ≈ 2·10^-10. Wow!

New versions of TCP for high-speed networks are needed!
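
A quick Java check of these numbers (the 1.22·MSS/(RTT·√L) throughput formula is from the book; everything else here is straightforward arithmetic):

// Verify the required window and tolerable loss rate for 10 Gbps over a 100 ms RTT path.
public class HighSpeedTcp {
    public static void main(String[] args) {
        double mssBits = 1500 * 8;     // 1500-byte segments
        double rtt = 0.100;            // 100 ms
        double target = 10e9;          // 10 Gbps

        double windowSegments = target * rtt / mssBits;
        System.out.printf("required window ~ %.0f segments%n", windowSegments);   // ~ 83333

        // Solve 1.22 * MSS / (RTT * sqrt(L)) = target for L
        double sqrtL = 1.22 * mssBits / (rtt * target);
        double lossRate = sqrtL * sqrtL;
        System.out.printf("tolerable loss rate L ~ %.1e%n", lossRate);            // ~ 2e-10
    }
}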

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

[Diagram: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R]

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g. similar propagation delay, etc.):
• additive increase gives a slope of 1 as throughput increases
• multiplicative decrease decreases throughput proportionally

[Plot: Connection 1 throughput vs. Connection 2 throughput, each axis bounded by R; the trajectory oscillates around the "equal bandwidth share" line:
congestion avoidance: additive increase
loss: decrease window by factor of 2
congestion avoidance: additive increase
loss: decrease window by factor of 2]

3-100

Fairness (more)

Fairness and UDP:
• multimedia apps often do not use TCP: do not want rate throttled by congestion control
• instead use UDP: pump audio/video at constant rate, tolerate packet loss
• research area: TCP friendly

Fairness and parallel TCP connections:
• nothing prevents an app from opening parallel connections between 2 hosts
• Web browsers do this
• example: link of rate R supporting 9 connections:
  - new app asks for 1 TCP, gets rate R/10
  - new app asks for 11 TCPs, gets around R/2!

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by:
• TCP connection establishment
• data transmission delay
• slow start (sender's congestion window)

Notation, assumptions:
• assume one link between client and server of rate R
• S: MSS (bits); O: object size (bits)
• no retransmissions (no loss, no corruption)
• protocol overhead is negligible

Window size:
• first assume a fixed congestion window, W segments
• then dynamic windows, modeling slow start

3-102

Fixed congestion window (1): assume no congestion window constraint

Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data has been sent

delay = 2RTT + O/R

The object request is piggybacked. This is the "lower bound" of latency.

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R: wait for ACK after sending a window's worth of data

delay = 2RTT + O/R + (K - 1)·[S/R + RTT - WS/R]

where K is the # of windows of data that cover the object: if O is the "size" of the object, K = O/(WS), rounded up to an integer.
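
These two cases fit in a few lines of Java; the sketch below is illustrative only (sizes in bits, times in seconds; the example parameters are assumptions).

// Latency of downloading an O-bit object over a static congestion window of W segments.
public class FixedWindowDelay {
    static double delay(double O, double S, double R, int W, double rtt) {
        double base = 2 * rtt + O / R;                 // setup + request + transmission time
        if (W * S / R >= rtt + S / R) return base;     // case 1: ACKs return before the window drains
        int K = (int) Math.ceil(O / (W * S));          // # of windows covering the object
        double stall = S / R + rtt - W * S / R;        // idle time between windows
        return base + (K - 1) * stall;                 // case 2: sender stalls K-1 times
    }

    public static void main(String[] args) {
        // Assumed example: O = 100 Kbytes, S = 536 bytes, R = 1 Mbps, W = 4, RTT = 100 ms
        System.out.printf("delay = %.2f s%n", delay(100e3 * 8, 536 * 8, 1e6, 4, 0.1));
    }
}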

3-104

TCP Delay Modeling Slow Start (1)

Now suppose the window grows according to slow start.

Will show that the delay for one object is:

Latency = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where P is the number of times TCP idles at the server:

P = min{Q, K - 1}

- where Q is the number of times the server would idle if the object were of infinite size
- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[Timeline: client initiates TCP connection and requests the object; server sends the first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R; transmission completes and the object is delivered; time at client and time at server are marked]

Example:
• O/S = 15 segments
• K = 4 windows
• Q = 2
• P = min{K-1, Q} = 2

Server idles P = 2 times.

Delay components:
• 2 RTT for connection establishment and request
• O/R to transmit the object
• time the server idles due to slow start

Server idles P = min{K-1, Q} times.
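
A Java sketch of this slow-start latency model, computing K, Q and P directly from their definitions and checking the O/S = 15 example (R and RTT are assumed values, since the slide's example only fixes O/S):

// Latency of one object under the slow-start model on these slides (sizes in bits, times in seconds).
public class SlowStartDelay {
    static double latency(double O, double S, double R, double rtt) {
        // K: number of windows of 1, 2, 4, ... segments needed to cover the object
        int K = 0;
        double covered = 0;
        while (covered < O) { covered += Math.pow(2, K) * S; K++; }

        // Q: number of windows after which an infinitely large object would leave the server idle
        int Q = 0;
        while (Math.pow(2, Q) * S / R < S / R + rtt) Q++;

        int P = Math.min(K - 1, Q);
        return 2 * rtt + O / R + P * (rtt + S / R) - (Math.pow(2, P) - 1) * S / R;
    }

    public static void main(String[] args) {
        double S = 536 * 8, O = 15 * S;        // O/S = 15 segments, as in the example
        double R = 100e3, rtt = 0.1;           // assumed link rate and RTT (gives K = 4, Q = 2, P = 2)
        System.out.printf("latency = %.3f s%n", latency(O, S, R, rtt));
    }
}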

3-106

TCP Delay Modeling (3)

time to transmit the k-th window:  2^(k-1) · S/R

idle time after the k-th window:  [ S/R + RTT - 2^(k-1) · S/R ]+
(S/R + RTT is the time from when the server starts to send a segment until the server receives its acknowledgement)

delay = 2·RTT + O/R + sum_{p=1..P} idleTime_p
      = 2·RTT + O/R + sum_{k=1..P} [ S/R + RTT - 2^(k-1) · S/R ]
      = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where k = 1, 2, …, K.

[Timeline as on the previous slide: connection setup, object request, windows of S/R, 2S/R, 4S/R, 8S/R, object delivered]

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

K = min{ k : 2^0·S + 2^1·S + … + 2^(k-1)·S ≥ O }
  = min{ k : 2^0 + 2^1 + … + 2^(k-1) ≥ O/S }
  = min{ k : 2^k - 1 ≥ O/S }
  = min{ k : k ≥ log2(O/S + 1) }
  = ⌈ log2(O/S + 1) ⌉

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).

3-108

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 Kbytes (relatively large)

R         O/R        P   Min latency: O/R + 2·RTT   Latency with slow start
28 kbps   28.6 sec   1   28.8 sec                   28.9 sec
100 kbps  8 sec      2   8.2 sec                    8.4 sec
1 Mbps    800 msec   5   1.0 sec                    1.5 sec
10 Mbps   80 msec    7   0.28 sec                   0.98 sec

1. Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 Kbytes (relatively small)

R         O/R        P   Min latency: O/R + 2·RTT   Latency with slow start
28 kbps   1.43 sec   1   1.63 sec                   1.73 sec
100 kbps  0.4 sec    2   0.6 sec                    0.76 sec
1 Mbps    40 msec    3   0.24 sec                   0.52 sec
10 Mbps   4 msec     3   0.20 sec                   0.50 sec

1. Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 Kbytes (relatively small)

R         O/R        P   Min latency: O/R + 2·RTT   Latency with slow start
28 kbps   1.43 sec   3   3.4 sec                    5.8 sec
100 kbps  0.4 sec    3   2.4 sec                    5.2 sec
1 Mbps    40 msec    3   2.0 sec                    5.0 sec
10 Mbps   4 msec     3   2.0 sec                    5.0 sec

1. Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
• 1 base HTML page (of size O bits)
• M images (each of size O bits)

Non-persistent HTTP:
• M+1 TCP connections in series
• Response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times

Persistent HTTP:
• 2 RTT to request and receive base HTML file
• 1 RTT to request and receive the M images
• Response time = (M+1)·O/R + 3·RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
• suppose M/X is an integer
• 1 TCP connection for the base file
• M/X sets of parallel connections for the images
• Response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times
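
A small Java sketch of these three formulas, dropping the slow-start idle-time terms and plugging in the parameter values used in the chart on the next slide:

// Response-time formulas above, without the "sum of idle times" terms.
public class HttpResponseTime {
    static double nonPersistent(int M, double O, double R, double rtt) {
        return (M + 1) * O / R + (M + 1) * 2 * rtt;
    }
    static double persistent(int M, double O, double R, double rtt) {
        return (M + 1) * O / R + 3 * rtt;
    }
    static double parallelNonPersistent(int M, int X, double O, double R, double rtt) {
        return (M + 1) * O / R + ((double) M / X + 1) * 2 * rtt;   // assumes M/X is an integer
    }

    public static void main(String[] args) {
        double O = 5 * 8 * 1024, R = 100e3, rtt = 0.1;   // O = 5 Kbytes, R = 100 kbps, RTT = 100 msec
        int M = 10, X = 5;
        System.out.printf("non-persistent: %.2f s%n", nonPersistent(M, O, R, rtt));
        System.out.printf("persistent:     %.2f s%n", persistent(M, O, R, rtt));
        System.out.printf("parallel:       %.2f s%n", parallelNonPersistent(M, X, O, R, rtt));
    }
}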

3-112

[Bar chart: HTTP response time (in seconds) at R = 28 kbps, 100 kbps, 1 Mbps and 10 Mbps, comparing non-persistent, persistent, and parallel non-persistent; RTT = 100 msec, O = 5 Kbytes, M = 10 and X = 5]

For a low-bandwidth connection, response time is dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

[Bar chart: HTTP response time (in seconds) at R = 28 kbps, 100 kbps, 1 Mbps and 10 Mbps, comparing non-persistent, persistent, and parallel non-persistent; RTT = 1 sec, O = 5 Kbytes, M = 10 and X = 5]

For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay·bandwidth networks.

3-114

Chapter 3: Summary

principles behind transport layer services:
• multiplexing, demultiplexing
• reliable data transfer
• flow control
• congestion control

instantiation and implementation in the Internet:
• UDP
• TCP

Next:
• leaving the network "edge" (application, transport layers)
• into the network "core"

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, sender is in slow-start phase; window grows exponentially.

When CongWin is above Threshold, sender is in congestion-avoidance phase; window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP-Vegas: instead of reacting to a loss, anticipate & prepare for a loss.

3-95

TCP sender congestion control

Event: ACK receipt for previously unacked data, while in Slow Start (SS)
Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data, while in Congestion Avoidance (CA)
Action: CongWin = CongWin + MSS * (MSS / CongWin)
Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK (SS or CA)
Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout (SS or CA)
Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: enter slow start

Event: duplicate ACK (SS or CA)
Action: increment duplicate ACK count for the segment being acked
Commentary: CongWin and Threshold not changed
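Read as a state machine, the table above can be rendered in a few lines of code. The sketch below is only a simplified illustration of those rules (class and method names are ours, and details of Reno's fast recovery beyond what the table states are omitted).

```python
# Simplified congestion-control state machine following the event table above.
# Quantities are in bytes; this is an illustrative sketch, not a real TCP stack.

class CongestionControl:
    def __init__(self, mss=1460, init_threshold=64_000):
        self.mss = mss
        self.cong_win = mss            # slow start begins at 1 MSS
        self.threshold = init_threshold
        self.state = "SS"              # "SS" = slow start, "CA" = congestion avoidance
        self.dup_acks = 0

    def on_new_ack(self):
        """ACK receipt for previously unacked data."""
        self.dup_acks = 0
        if self.state == "SS":
            self.cong_win += self.mss                # +1 MSS per ACK: doubles CongWin each RTT
            if self.cong_win > self.threshold:
                self.state = "CA"
        else:
            self.cong_win += self.mss * self.mss / self.cong_win   # ~ +1 MSS per RTT

    def on_dup_ack(self):
        """Duplicate ACK; the third one triggers the multiplicative decrease."""
        self.dup_acks += 1
        if self.dup_acks == 3:
            self.threshold = self.cong_win / 2
            self.cong_win = max(self.mss, self.threshold)   # never below 1 MSS
            self.state = "CA"
            self.dup_acks = 0

    def on_timeout(self):
        """Timeout: back to slow start with CongWin = 1 MSS."""
        self.threshold = self.cong_win / 2
        self.cong_win = self.mss
        self.state = "SS"
        self.dup_acks = 0
```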

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when loss occurs.

When the window is W, throughput is W/RTT.

Just after a loss, the window drops to W/2, throughput to (W/2)/RTT.

Average throughput: 0.75 W/RTT
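The 0.75 factor is just the midpoint of a window that ramps linearly from W/2 back up to W between losses; a quick check with assumed numbers:

```python
W, RTT = 100_000, 0.1                 # assumed: window of 100 kbytes at loss, RTT = 100 ms
avg_window = (W / 2 + W) / 2          # mean of a linear ramp from W/2 up to W
print(avg_window / RTT)               # 750000.0 bytes/sec = 0.75 * W / RTT
```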

3-97

TCP Futures

Example: 1500 byte segments, 100 ms RTT, want 10 Gbps throughput

Requires window size W = 83,333 in-flight segments

Throughput in terms of loss rate L:  Throughput = 1.22 * MSS / (RTT * sqrt(L))

So sustaining 10 Gbps needs a loss rate of L = 2 * 10^-10. Wow! New versions of TCP for high speed are needed!
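Plugging the slide's numbers into that relation (a quick check, with the formula inverted to solve for L):

```python
MSS = 1500 * 8          # segment size in bits
RTT = 0.1               # seconds
target = 10e9           # desired throughput: 10 Gbps

W = target * RTT / MSS                     # in-flight segments needed
L = (1.22 * MSS / (RTT * target)) ** 2     # from Throughput = 1.22 * MSS / (RTT * sqrt(L))
print(round(W), L)                         # ~83333 segments, L on the order of 2e-10
```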

3-98

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

[figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g. similar propagation delay, etc.)

Additive increase gives a slope of 1 as throughput increases

Multiplicative decrease decreases throughput proportionally

[figure: phase plot of Connection 1 throughput (x-axis) vs. Connection 2 throughput (y-axis), both bounded by R; the trajectory alternates "congestion avoidance: additive increase" and "loss: decrease window by factor of 2", converging toward the equal-bandwidth-share line]
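The same argument can be seen numerically. The rough sketch below (capacity and starting rates are arbitrary assumptions) runs two synchronized AIMD flows over a shared link of capacity R; additive increase preserves their gap while each multiplicative decrease halves it, so the two rates converge.

```python
# Two synchronized AIMD flows sharing a bottleneck of capacity R (illustrative sketch).
R = 100.0                  # bottleneck capacity, arbitrary units
x1, x2 = 80.0, 10.0        # deliberately unequal starting throughputs

for _ in range(1000):
    if x1 + x2 > R:                    # overload: both connections see a loss
        x1, x2 = x1 / 2, x2 / 2        # multiplicative decrease also halves the gap
    else:
        x1, x2 = x1 + 1.0, x2 + 1.0    # additive increase keeps the gap constant

print(round(x1, 2), round(x2, 2))      # the two rates end up (nearly) equal: a fair share
```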

3-100

Fairness (more)

Fairness and UDP: multimedia apps often do not use TCP: they do not want their rate throttled by congestion control.

Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss.

Research area: TCP-friendly congestion control.

Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts. Web browsers do this.

Example: link of rate R supporting 9 connections:
new app asks for 1 TCP, gets rate R/10
new app asks for 11 TCPs, gets around R/2 (it holds 11 of the 20 connections)

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by: TCP connection establishment, data transmission delay, slow start, sender's congestion window.

Notation, assumptions:
Assume one link between client and server of rate R
S: MSS (bits)
O: object size (bits)
no retransmissions (no loss, no corruption)
protocol's overhead is negligible

Window size:
First assume a fixed congestion window, W segments
Then dynamic window modeling: slow start

3-102

Fixed congestion window (1): assume no congestion window constraint

First case: WS/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data is sent

delay = 2RTT + O/R

Object request is piggybacked.

This is the "lower bound" of latency.

Let W denote a 'fixed' congestion window size (a positive integer).

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R: wait for ACK after sending a window's worth of data

delay = 2RTT + O/R + (K-1)[S/R + RTT - WS/R]

where K is the number of windows of data that cover the object. If O is the "size" of the object, K = O/(WS), rounded up to an integer.
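A small helper that evaluates both fixed-window cases; the rounding of K and the example numbers are our own assumptions, consistent with the formulas above.

```python
from math import ceil

def static_window_delay(O, S, W, R, RTT):
    """Latency for a fixed congestion window of W segments, no loss (O, S in bits)."""
    K = ceil(O / (W * S))                 # number of windows that cover the object
    base = 2 * RTT + O / R                # connection setup + request + transmission time
    stall = S / R + RTT - W * S / R       # idle time per stalled window (if positive)
    if stall <= 0:                        # case 1: W*S/R >= RTT + S/R, sender never stalls
        return base
    return base + (K - 1) * stall         # case 2: sender stalls after each of K-1 windows

# assumed example: 100 kbyte object, 536-byte segments, W = 4, R = 1 Mbps, RTT = 100 ms
print(round(static_window_delay(100e3 * 8, 536 * 8, 4, 1e6, 0.1), 3), "sec")   # ~5.008 sec
```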

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

Latency = 2RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where P is the number of times TCP idles at the server:

P = min{Q, K-1}

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[figure: timing diagram: client initiates TCP connection and requests the object; server sends first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R; transmission completes and the object is delivered; time axes at client and server]

Example:
O/S = 15 segments
K = 4 windows
Q = 2
P = min{K-1, Q} = 2
Server idles P = 2 times

Delay components:
2 RTT for connection establishment and request
O/R to transmit the object
time the server idles due to slow start
Server idles P = min{K-1, Q} times

3-106

TCP Delay Modeling (3)

delay = 2RTT + O/R + sum_{k=1..P} idleTime_k
      = 2RTT + O/R + sum_{k=1..P} [S/R + RTT - 2^(k-1) S/R]
      = 2RTT + O/R + P [S/R + RTT] - (2^P - 1) S/R

where, for the k-th window (k = 1, 2, ..., K):
time to transmit the k-th window = 2^(k-1) S/R
time from when the server starts to send a segment until the server receives its acknowledgement = S/R + RTT
idle time after the k-th window = max{ S/R + RTT - 2^(k-1) S/R, 0 }

[figure: same timing diagram as the previous slide, with windows S/R, 2S/R, 4S/R, 8S/R]

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

K = min{ k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S }
  = min{ k : 2^k - 1 >= O/S }
  = min{ k : k >= log2(O/S + 1) }
  = ceil( log2(O/S + 1) )

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
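Putting the pieces of the model together (K from this slide, the idle-time sum from the previous ones), a small calculator reproduces the P column of the experiment tables that follow. The variable names are ours, and the latencies it prints are close to, though not always identical with, the slide tables' last column, which may reflect slightly different rounding or assumptions.

```python
from math import ceil, log2

def slow_start_latency(O, S, R, RTT):
    """Latency with slow start, per the slides: 2*RTT + O/R + idle time at the server."""
    K = ceil(log2(O / S + 1))             # number of windows that cover the object
    stalls, P = 0.0, 0
    for k in range(1, K):                 # the server can idle at most K-1 times
        idle = S / R + RTT - (2 ** (k - 1)) * S / R
        if idle > 0:                      # only the first P = min{Q, K-1} windows stall
            stalls += idle
            P += 1
    return 2 * RTT + O / R + stalls, K, P

# Experiment A inputs: S = 536 bytes, RTT = 100 msec, O = 100 kbytes
for R in (28e3, 100e3, 1e6, 10e6):
    latency, K, P = slow_start_latency(100e3 * 8, 536 * 8, R, 0.1)
    print(f"R = {R/1e3:>6.0f} kbps   P = {P}   latency = {latency:.2f} sec")
```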

3-108

Experiment A

R         O/R        P    Min latency (O/R + 2 RTT)    Latency with slow start
28 kbps   28.6 sec   1    28.8 sec                     28.9 sec
100 kbps  8.0 sec    2    8.2 sec                      8.4 sec
1 Mbps    800 msec   5    1.0 sec                      1.5 sec
10 Mbps   80 msec    7    0.28 sec                     0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

1. Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R         O/R       P    Min latency (O/R + 2 RTT)    Latency with slow start
28 kbps   1.43 sec  1    1.63 sec                     1.73 sec
100 kbps  0.4 sec   2    0.6 sec                      0.76 sec
1 Mbps    40 msec   3    0.24 sec                     0.52 sec
10 Mbps   4 msec    3    0.20 sec                     0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

1. Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

R         O/R       P    Min latency (O/R + 2 RTT)    Latency with slow start
28 kbps   1.43 sec  3    3.4 sec                      5.8 sec
100 kbps  0.4 sec   3    2.4 sec                      5.2 sec
1 Mbps    40 msec   3    2.0 sec                      5.0 sec
10 Mbps   4 msec    3    2.0 sec                      5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

1. Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling: Assume a Web page consists of:
1 base HTML page (of size O bits)
M images (each of size O bits)

Non-persistent HTTP:
M+1 TCP connections in series
Response time = (M+1)O/R + (M+1)2RTT + sum of idle times

Persistent HTTP:
2 RTT to request and receive the base HTML file
1 RTT to request and receive the M images
Response time = (M+1)O/R + 3RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
Suppose M/X is an integer
1 TCP connection for the base file
M/X sets of parallel connections for the images
Response time = (M+1)O/R + (M/X + 1)2RTT + sum of idle times
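The three response-time expressions can be compared directly. The sketch below evaluates them with the idle-time terms ignored, which is why it understates the values plotted on the next two slides; the parameter names are ours.

```python
def http_response_times(O, M, R, RTT, X):
    """Non-persistent, persistent, and X-parallel non-persistent HTTP (idle times ignored)."""
    transmit = (M + 1) * O / R                       # every object crosses the link once
    non_persistent = transmit + (M + 1) * 2 * RTT    # 2 RTTs per connection, in series
    persistent = transmit + 3 * RTT                  # 2 RTTs for the base file + 1 RTT for images
    parallel = transmit + (M / X + 1) * 2 * RTT      # M/X batches of parallel connections
    return non_persistent, persistent, parallel

# assumed example matching the next slide: RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5
for R in (28e3, 100e3, 1e6, 10e6):
    np_, p_, par_ = http_response_times(5e3 * 8, 10, R, 0.1, 5)
    print(f"R = {R/1e3:>6.0f} kbps  non-persistent = {np_:6.2f} s  "
          f"persistent = {p_:6.2f} s  parallel = {par_:6.2f} s")
```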

3-112

[figure: bar chart of HTTP response time (in seconds), scale 0 to 20, for link rates 28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps; bars for non-persistent, persistent, and parallel non-persistent. RTT = 100 msec, O = 5 Kbytes, M = 10 and X = 5]

For low bandwidth, connection & response time are dominated by transmission time.

Persistent connections only give a minor improvement over parallel connections.

3-113

[figure: bar chart of HTTP response time (in seconds), scale 0 to 70, for link rates 28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps; bars for non-persistent, persistent, and parallel non-persistent. RTT = 1 sec, O = 5 Kbytes, M = 10 and X = 5]

For larger RTT, response time is dominated by TCP establishment & slow start delays. Persistent connections now give an important improvement, particularly in high delay × bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services: multiplexing, demultiplexing, reliable data transfer, flow control, congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application, transport layers) and heading into the network "core"

Page 7: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-7

Multiplexingdemultiplexing

application

transport

network

link

physical

P1 application

transport

network

link

physical

application

transport

network

link

physical

P2P3 P4P1

host 1 host 2 host 3

= process= socket

delivering received segmentsto correct socket

Demultiplexing at rcv hostgathering data from multiplesockets enveloping data with header (later used for demultiplexing)

Multiplexing at send host

FTP telnet

3-8

How demultiplexing works host receives IP datagrams

each datagram has source IP address destination IP address

each datagram carries 1 transport-layer segment

each segment has source destination port number (recall well-known port numbers for specific applications)

host uses IP addresses amp port numbers to direct segment to appropriate socket

source port dest port

32 bits

applicationdata

(message)

other header fields

TCPUDP segment format

3-9

Connectionless demultiplexing

Create sockets with port numbers

DatagramSocket mySocket1 = new DatagramSocket(99111)

DatagramSocket mySocket2 = new DatagramSocket(99222)

UDP socket identified by two-tuple

(dest IP address dest port number)

When host receives UDP segment checks destination port

number in segment directs UDP segment to

socket with that port number

IP datagrams with different source IP addresses andor source port numbers directed to same socket (this is how a system can serve multiple requests)

3-10

Connectionless demux (cont)

DatagramSocket serverSocket = new DatagramSocket(6428)

ClientIPB

P3

client IP A

P1P1P3

serverIP C

SP 6428

DP 9157

SP 9157

DP 6428

SP 6428

DP 5775

SP 5775

DP 6428

SP provides ldquoreturn addressrdquo

Source IP and port can be spoofed

Based on destination IP and

port

3-11

Connection-oriented demux

TCP socket identified by 4-tuple source IP address source port number dest IP address dest port number

recv host uses all four values to direct segment to appropriate socket

Server host may support many simultaneous TCP sockets each socket identified

by its own 4-tuple

Web servers have different sockets for each connecting client non-persistent HTTP

will have different socket for each request

3-12

Connection-oriented demux (cont)

ClientIPB

P3

client IP A

P1P1P3

serverIP C

SP 80

DP 9157

SP 9157

DP 80

SP 80

DP 5775

SP 5775

DP 80

P4

(S-IPSP D-IP DP)

3-13

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-14

UDP User Datagram Protocol [RFC 768]

ldquono frillsrdquo ldquobare bonesrdquo Internet transport protocol

ldquobest effortrdquo service UDP segments may be lost delivered out of order

to app connectionless

no handshaking between UDP sender receiver

each UDP segment handled independently of others

Why is there a UDP no connection

establishment (which can add delay)

simple no connection state at sender receiver

small segment header no congestion control

UDP can blast away as fast as desired

3-15

UDP more

often used for streaming multimedia apps loss tolerant rate sensitive

other UDP uses DNS SNMP

reliable transfer over UDP add reliability at application layer application-specific

error recovery (eg FTP based on UDP but with recovery)

source port dest port

32 bits

Applicationdata

(message)

UDP segment format

length checksumLength in

bytes of UDPsegmentincluding

headerWhen the network is stressed you PRAY

3-16

UDP checksum

Sender treat segment

contents as sequence of 16-bit integers

checksum addition (1rsquos complement sum) of segment contents

sender puts checksum value into UDP checksum field

Receiver compute checksum of

received segment check if computed

checksum equals checksum field value NO - error detected YES - no error detected

But maybe errors nonetheless More later hellip

Goal detect ldquoerrorsrdquo (eg flipped bits) in transmitted segment

eg 1+2+3 = 6 So is 0+3+3=6

3-17

Internet Checksum Example Note

When adding numbers a carryout from the most significant bit needs to be added to the result

Example add two 16-bit integers

1 1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 01 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 1 0 01 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1

wraparound

sumchecksum

3-18

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-19

Principles of Reliable data transfer important in app transport link layers top-10 list of important networking topics

characteristics of unreliable channel will determine complexity of reliable data transfer protocol (rdt)

abstractionThis picture sets the scenario

3-20

Reliable data transfer getting started

sendside

receiveside

rdt_send() called from above (eg by app)

Passed data to deliver to receiver upper

layer

udt_send() called by rdtto transfer packet over

unreliable channel to receiver

rdt_rcv() called when packet arrives on rcv-side of

channel

deliver_data() called by rdt to deliver data

to upper

Let us now look at the gut of these modules Any question

3-21

Reliable data transfer getting startedWersquoll incrementally develop sender receiver

sides of reliable data transfer protocol (rdt)

consider only unidirectional data transfer but control info will flow on both directions

use finite state machines (FSM) to specify sender receiver

state1

state2

event causing state transitionactions taken on state transition

state when in this ldquostaterdquo next

state uniquely determined by

next eventeventactions

(DONrsquoT FALL ASLEEP)

Event timer receives message hellipetcAction executes a program send message hellipetc

3-22

Rdt10 reliable transfer over a reliable channel

underlying channel perfectly reliable no bit errors no loss of packets

separate FSMs for sender receiver sender sends data into underlying channel receiver reads data from underlying channel

Wait for call from

above packet = make_pkt(data)udt_send(packet)

rdt_send(data)

extract (packetdata)deliver_data(data)

Wait for call from

below

rdt_rcv(packet)

sender receiver

In reality this is an unrealistic assumption but

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in packet recall UDP checksum to detect bit errors

the question how to recover from errors acknowledgements (ACKs) receiver explicitly tells

sender that pkt received OK negative acknowledgements (NAKs) receiver

explicitly tells sender that pkt had errors sender retransmits pkt on receipt of NAK human scenarios using ACKs NAKs

new mechanisms in rdt20 (beyond rdt10) error detection receiver feedback control msgs (ACKNAK) rcvr-

gtsender

Ack I love u I love u 2Nak I love u I donrsquot love u

3-24

rdt20 FSM specification

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

belowsender

receiverrdt_send(data)

Buffer is needed to store data from application layer or to block call

3-25

rdt20 operation with no errors

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

3-26

rdt20 error scenario

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

GOT IT

3-27

rdt20 has a fatal flaw

What happens if ACKNAK corrupted

sender doesnrsquot know what happened at receiver

canrsquot just retransmit possible duplicate

What to do sender ACKsNAKs

receiverrsquos ACKNAK What if sender ACKNAK lost

retransmit but this might cause retransmission of correctly received pkt

Handling duplicates sender adds sequence

number to each pkt sender retransmits

current pkt if ACKNAK garbled

receiver discards (doesnrsquot deliver up) duplicate pkt

Sender sends one packet then waits for receiver response

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

Page 8: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-8

How demultiplexing works host receives IP datagrams

each datagram has source IP address destination IP address

each datagram carries 1 transport-layer segment

each segment has source destination port number (recall well-known port numbers for specific applications)

host uses IP addresses amp port numbers to direct segment to appropriate socket

source port dest port

32 bits

applicationdata

(message)

other header fields

TCPUDP segment format

3-9

Connectionless demultiplexing

Create sockets with port numbers

DatagramSocket mySocket1 = new DatagramSocket(99111)

DatagramSocket mySocket2 = new DatagramSocket(99222)

UDP socket identified by two-tuple

(dest IP address dest port number)

When host receives UDP segment checks destination port

number in segment directs UDP segment to

socket with that port number

IP datagrams with different source IP addresses andor source port numbers directed to same socket (this is how a system can serve multiple requests)

3-10

Connectionless demux (cont)

DatagramSocket serverSocket = new DatagramSocket(6428)

ClientIPB

P3

client IP A

P1P1P3

serverIP C

SP 6428

DP 9157

SP 9157

DP 6428

SP 6428

DP 5775

SP 5775

DP 6428

SP provides ldquoreturn addressrdquo

Source IP and port can be spoofed

Based on destination IP and

port

3-11

Connection-oriented demux

TCP socket identified by 4-tuple source IP address source port number dest IP address dest port number

recv host uses all four values to direct segment to appropriate socket

Server host may support many simultaneous TCP sockets each socket identified

by its own 4-tuple

Web servers have different sockets for each connecting client non-persistent HTTP

will have different socket for each request

3-12

Connection-oriented demux (cont)

ClientIPB

P3

client IP A

P1P1P3

serverIP C

SP 80

DP 9157

SP 9157

DP 80

SP 80

DP 5775

SP 5775

DP 80

P4

(S-IPSP D-IP DP)

3-13

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-14

UDP User Datagram Protocol [RFC 768]

ldquono frillsrdquo ldquobare bonesrdquo Internet transport protocol

ldquobest effortrdquo service UDP segments may be lost delivered out of order

to app connectionless

no handshaking between UDP sender receiver

each UDP segment handled independently of others

Why is there a UDP no connection

establishment (which can add delay)

simple no connection state at sender receiver

small segment header no congestion control

UDP can blast away as fast as desired

3-15

UDP more

often used for streaming multimedia apps loss tolerant rate sensitive

other UDP uses DNS SNMP

reliable transfer over UDP add reliability at application layer application-specific

error recovery (eg FTP based on UDP but with recovery)

source port dest port

32 bits

Applicationdata

(message)

UDP segment format

length checksumLength in

bytes of UDPsegmentincluding

headerWhen the network is stressed you PRAY

3-16

UDP checksum

Sender treat segment

contents as sequence of 16-bit integers

checksum addition (1rsquos complement sum) of segment contents

sender puts checksum value into UDP checksum field

Receiver compute checksum of

received segment check if computed

checksum equals checksum field value NO - error detected YES - no error detected

But maybe errors nonetheless More later hellip

Goal detect ldquoerrorsrdquo (eg flipped bits) in transmitted segment

eg 1+2+3 = 6 So is 0+3+3=6

3-17

Internet Checksum Example Note

When adding numbers a carryout from the most significant bit needs to be added to the result

Example add two 16-bit integers

1 1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 01 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 1 0 01 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1

wraparound

sumchecksum

3-18

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-19

Principles of Reliable data transfer important in app transport link layers top-10 list of important networking topics

characteristics of unreliable channel will determine complexity of reliable data transfer protocol (rdt)

abstractionThis picture sets the scenario

3-20

Reliable data transfer getting started

sendside

receiveside

rdt_send() called from above (eg by app)

Passed data to deliver to receiver upper

layer

udt_send() called by rdtto transfer packet over

unreliable channel to receiver

rdt_rcv() called when packet arrives on rcv-side of

channel

deliver_data() called by rdt to deliver data

to upper

Let us now look at the gut of these modules Any question

3-21

Reliable data transfer getting startedWersquoll incrementally develop sender receiver

sides of reliable data transfer protocol (rdt)

consider only unidirectional data transfer but control info will flow on both directions

use finite state machines (FSM) to specify sender receiver

state1

state2

event causing state transitionactions taken on state transition

state when in this ldquostaterdquo next

state uniquely determined by

next eventeventactions

(DONrsquoT FALL ASLEEP)

Event timer receives message hellipetcAction executes a program send message hellipetc

3-22

Rdt10 reliable transfer over a reliable channel

underlying channel perfectly reliable no bit errors no loss of packets

separate FSMs for sender receiver sender sends data into underlying channel receiver reads data from underlying channel

Wait for call from

above packet = make_pkt(data)udt_send(packet)

rdt_send(data)

extract (packetdata)deliver_data(data)

Wait for call from

below

rdt_rcv(packet)

sender receiver

In reality this is an unrealistic assumption but

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in packet recall UDP checksum to detect bit errors

the question how to recover from errors acknowledgements (ACKs) receiver explicitly tells

sender that pkt received OK negative acknowledgements (NAKs) receiver

explicitly tells sender that pkt had errors sender retransmits pkt on receipt of NAK human scenarios using ACKs NAKs

new mechanisms in rdt20 (beyond rdt10) error detection receiver feedback control msgs (ACKNAK) rcvr-

gtsender

Ack I love u I love u 2Nak I love u I donrsquot love u

3-24

rdt20 FSM specification

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

belowsender

receiverrdt_send(data)

Buffer is needed to store data from application layer or to block call

3-25

rdt20 operation with no errors

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

3-26

rdt20 error scenario

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

GOT IT

3-27

rdt20 has a fatal flaw

What happens if ACKNAK corrupted

sender doesnrsquot know what happened at receiver

canrsquot just retransmit possible duplicate

What to do sender ACKsNAKs

receiverrsquos ACKNAK What if sender ACKNAK lost

retransmit but this might cause retransmission of correctly received pkt

Handling duplicates sender adds sequence

number to each pkt sender retransmits

current pkt if ACKNAK garbled

receiver discards (doesnrsquot deliver up) duplicate pkt

Sender sends one packet then waits for receiver response

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 9: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-9

Connectionless demultiplexing

Create sockets with port numbers

DatagramSocket mySocket1 = new DatagramSocket(99111)

DatagramSocket mySocket2 = new DatagramSocket(99222)

UDP socket identified by two-tuple

(dest IP address dest port number)

When host receives UDP segment checks destination port

number in segment directs UDP segment to

socket with that port number

IP datagrams with different source IP addresses andor source port numbers directed to same socket (this is how a system can serve multiple requests)

3-10

Connectionless demux (cont)

DatagramSocket serverSocket = new DatagramSocket(6428)

ClientIPB

P3

client IP A

P1P1P3

serverIP C

SP 6428

DP 9157

SP 9157

DP 6428

SP 6428

DP 5775

SP 5775

DP 6428

SP provides ldquoreturn addressrdquo

Source IP and port can be spoofed

Based on destination IP and

port

3-11

Connection-oriented demux

TCP socket identified by 4-tuple source IP address source port number dest IP address dest port number

recv host uses all four values to direct segment to appropriate socket

Server host may support many simultaneous TCP sockets each socket identified

by its own 4-tuple

Web servers have different sockets for each connecting client non-persistent HTTP

will have different socket for each request

3-12

Connection-oriented demux (cont)

ClientIPB

P3

client IP A

P1P1P3

serverIP C

SP 80

DP 9157

SP 9157

DP 80

SP 80

DP 5775

SP 5775

DP 80

P4

(S-IPSP D-IP DP)

3-13

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-14

UDP User Datagram Protocol [RFC 768]

ldquono frillsrdquo ldquobare bonesrdquo Internet transport protocol

ldquobest effortrdquo service UDP segments may be lost delivered out of order

to app connectionless

no handshaking between UDP sender receiver

each UDP segment handled independently of others

Why is there a UDP no connection

establishment (which can add delay)

simple no connection state at sender receiver

small segment header no congestion control

UDP can blast away as fast as desired

3-15

UDP more

often used for streaming multimedia apps loss tolerant rate sensitive

other UDP uses DNS SNMP

reliable transfer over UDP add reliability at application layer application-specific

error recovery (eg FTP based on UDP but with recovery)

source port dest port

32 bits

Applicationdata

(message)

UDP segment format

length checksumLength in

bytes of UDPsegmentincluding

headerWhen the network is stressed you PRAY

3-16

UDP checksum

Sender treat segment

contents as sequence of 16-bit integers

checksum addition (1rsquos complement sum) of segment contents

sender puts checksum value into UDP checksum field

Receiver compute checksum of

received segment check if computed

checksum equals checksum field value NO - error detected YES - no error detected

But maybe errors nonetheless More later hellip

Goal detect ldquoerrorsrdquo (eg flipped bits) in transmitted segment

eg 1+2+3 = 6 So is 0+3+3=6

3-17

Internet Checksum Example Note

When adding numbers a carryout from the most significant bit needs to be added to the result

Example add two 16-bit integers

1 1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 01 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 1 0 01 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1

wraparound

sumchecksum

3-18

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-19

Principles of Reliable data transfer important in app transport link layers top-10 list of important networking topics

characteristics of unreliable channel will determine complexity of reliable data transfer protocol (rdt)

abstractionThis picture sets the scenario

3-20

Reliable data transfer getting started

sendside

receiveside

rdt_send() called from above (eg by app)

Passed data to deliver to receiver upper

layer

udt_send() called by rdtto transfer packet over

unreliable channel to receiver

rdt_rcv() called when packet arrives on rcv-side of

channel

deliver_data() called by rdt to deliver data

to upper

Let us now look at the gut of these modules Any question

3-21

Reliable data transfer getting startedWersquoll incrementally develop sender receiver

sides of reliable data transfer protocol (rdt)

consider only unidirectional data transfer but control info will flow on both directions

use finite state machines (FSM) to specify sender receiver

state1

state2

event causing state transitionactions taken on state transition

state when in this ldquostaterdquo next

state uniquely determined by

next eventeventactions

(DONrsquoT FALL ASLEEP)

Event timer receives message hellipetcAction executes a program send message hellipetc

3-22

Rdt10 reliable transfer over a reliable channel

underlying channel perfectly reliable no bit errors no loss of packets

separate FSMs for sender receiver sender sends data into underlying channel receiver reads data from underlying channel

Wait for call from

above packet = make_pkt(data)udt_send(packet)

rdt_send(data)

extract (packetdata)deliver_data(data)

Wait for call from

below

rdt_rcv(packet)

sender receiver

In reality this is an unrealistic assumption but

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in packet recall UDP checksum to detect bit errors

the question how to recover from errors acknowledgements (ACKs) receiver explicitly tells

sender that pkt received OK negative acknowledgements (NAKs) receiver

explicitly tells sender that pkt had errors sender retransmits pkt on receipt of NAK human scenarios using ACKs NAKs

new mechanisms in rdt20 (beyond rdt10) error detection receiver feedback control msgs (ACKNAK) rcvr-

gtsender

Ack I love u I love u 2Nak I love u I donrsquot love u

3-24

rdt20 FSM specification

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

belowsender

receiverrdt_send(data)

Buffer is needed to store data from application layer or to block call

3-25

rdt20 operation with no errors

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

3-26

rdt20 error scenario

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

GOT IT

3-27

rdt20 has a fatal flaw

What happens if the ACK/NAK is corrupted? The sender doesn't know what happened at the receiver, and it can't just retransmit: possible duplicate.

What to do? The sender could ACK/NAK the receiver's ACK/NAK -- but what if the sender's ACK/NAK is lost? Or just retransmit -- but this might cause retransmission of a correctly received pkt.

Handling duplicates: the sender adds a sequence number to each pkt; the sender retransmits the current pkt if the ACK/NAK is garbled; the receiver discards (doesn't deliver up) a duplicate pkt.

The sender sends one packet, then waits for the receiver's response:

a stop-and-wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender: seq # added to pkt; two seq #'s (0, 1) will suffice -- why? must check if the received ACK/NAK is corrupted; twice as many states: state must "remember" whether the "current" pkt has a 0 or 1 seq #

Receiver: must check if the received packet is a duplicate; state indicates whether 0 or 1 is the expected pkt seq #

note: the receiver can not know if its last ACK/NAK was received OK at the sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of a NAK, the receiver sends an ACK for the last pkt received OK; the receiver must explicitly include the seq # of the pkt being ACKed

a duplicate ACK at the sender results in the same action as a NAK: retransmit the current pkt

This is important because TCP uses this approach (no NAKs).

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption: the underlying channel can also lose packets (data or ACKs). Checksums, seq #'s, ACKs and retransmissions will help, but are not enough.

Q: how to deal with loss? The sender could wait until it is certain that the data or ACK was lost, then retransmit -- yuck, obvious drawbacks.

Approach: the sender waits a "reasonable" amount of time for the ACK, and retransmits if no ACK is received in this time.

if the pkt (or ACK) is just delayed (not lost): the retransmission will be a duplicate, but the use of seq #'s already handles this; the receiver must specify the seq # of the pkt being ACKed

requires a countdown timer

What is the "right value" for the timer? It depends on the flow and network conditions.
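To make the alternating-bit, stop-and-wait logic concrete, here is a minimal Python sketch of an rdt3.0-style sender loop. It is illustrative only: udt_send and udt_recv stand in for the unreliable channel (udt_recv is assumed to return None on timeout), and the hash-based checksum is a stand-in for a real one.

def make_pkt(seq, data):
    # illustrative packet: (seq, data, checksum over both fields)
    return (seq, data, hash((seq, data)))

def corrupt(pkt):
    seq, data, chk = pkt
    return chk != hash((seq, data))

def is_ack(pkt, seq):
    # ACK packets are assumed to carry the string 'ACK' as their data
    return (not corrupt(pkt)) and pkt[0] == seq and pkt[1] == 'ACK'

def rdt30_send(data_items, udt_send, udt_recv, timeout=1.0):
    """Stop-and-wait sender: send one packet, wait for the matching ACK,
    retransmit on timeout or on a corrupted/wrong ACK."""
    seq = 0
    for data in data_items:
        sndpkt = make_pkt(seq, data)
        udt_send(sndpkt)
        while True:
            rcvpkt = udt_recv(timeout)           # None means the timer expired
            if rcvpkt is None or corrupt(rcvpkt) or not is_ack(rcvpkt, seq):
                udt_send(sndpkt)                 # timeout or bad ACK: retransmit
            else:
                break                            # correct ACK: move on
        seq = 1 - seq                            # alternate 0/1 sequence numbers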

3-34

rdt30 sender

state "Wait for call 0 from above": rdt_send(data) -> sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt); start_timer

state "Wait for ACK0": rdt_rcv(rcvpkt) && ( corrupt(rcvpkt) || isACK(rcvpkt,1) ) -> do nothing; timeout -> udt_send(sndpkt); start_timer; rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0) -> stop_timer

state "Wait for call 1 from above": rdt_send(data) -> sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt); start_timer

state "Wait for ACK1": rdt_rcv(rcvpkt) && ( corrupt(rcvpkt) || isACK(rcvpkt,0) ) -> do nothing; timeout -> udt_send(sndpkt); start_timer; rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,1) -> stop_timer

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt3.0 works, but performance stinks. Example: 1 Gbps link, 15 ms end-to-end propagation delay, 1 KB packet:

T_transmit = L (packet length in bits) / R (transmission rate, bps) = 8000 bits / 10^9 bits/sec = 8 microsec

U_sender (utilization -- fraction of time the sender is busy sending) = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

1 KB pkt every 30 msec -> 33 kB/sec throughput over a 1 Gbps link

the network protocol limits use of the physical resources!
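To sanity-check these numbers, a small Python calculation (a sketch; the values are the slide's 1 Gbps / 15 ms / 1 KB example):

R = 1e9            # link rate, bits/sec
L = 8000           # packet length, bits (1 KB)
RTT = 0.030        # round-trip time, sec (2 x 15 ms propagation)
t_transmit = L / R                     # 8e-06 sec = 8 microsec
u_sender = (L / R) / (RTT + L / R)     # ~ 0.00027
throughput = L / (RTT + L / R)         # bits/sec
print(t_transmit, u_sender, throughput / 8 / 1000, "kB/sec")   # ~ 33 kB/sec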

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U_sender = 3*(L/R) / (RTT + L/R) = 0.024 / 30.008 = 0.0008

Pipelining increases utilization by a factor of 3!

3-41

Go-Back-N (sliding window protocol)

Sender: k-bit seq # in pkt header; a "window" of up to N consecutive unACKed pkts is allowed

ACK(n): ACKs all pkts up to and including seq # n -- "cumulative ACK"; the sender may receive duplicate ACKs (see receiver)

timer for in-flight pkts; timeout(n): retransmit pkt n and all higher-seq-# pkts in the window

(For now, treat the seq # as unlimited.)

Q: what happens when a receiver is totally disconnected? MAX RETRY.

DON'T FALL ASLEEP

3-42

GBN sender extended FSM

timeout:
  start_timer
  udt_send(sndpkt[base]); udt_send(sndpkt[base+1]); ...; udt_send(sndpkt[nextseqnum-1])

rdt_send(data):
  if (nextseqnum < base+N) {
    sndpkt[nextseqnum] = make_pkt(nextseqnum, data, chksum)
    udt_send(sndpkt[nextseqnum])
    if (base == nextseqnum) start_timer
    nextseqnum++
  } else refuse_data(data)        /* buffer the data, or block the higher app */

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
  base = getacknum(rcvpkt) + 1
  if (base == nextseqnum) stop_timer   /* no pkt left in the pipe */
  else start_timer                     /* reset the timer */

rdt_rcv(rcvpkt) && corrupt(rcvpkt): do nothing

initially: base = 1; nextseqnum = 1

3-43

GBN receiver extended FSM

ACK-only: always send an ACK for the correctly-received pkt with the highest in-order seq #; may generate duplicate ACKs; need only remember expectedseqnum

out-of-order pkt: discard (don't buffer) -> no receiver buffering; re-ACK the pkt with the highest in-order seq #

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum):
  extract(rcvpkt, data); deliver_data(data)
  sndpkt = make_pkt(expectedseqnum, ACK, chksum); udt_send(sndpkt)
  expectedseqnum++

default: udt_send(sndpkt)

initially: expectedseqnum = 1; sndpkt = make_pkt(expectedseqnum, ACK, chksum)

If an in-order pkt is received, deliver it to the app and ACK; else just drop it.

We don't even need a timer at the receiver -- simply rely on retransmission by the sender.
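The GBN sender FSM maps naturally onto a small event-driven class. The following Python sketch mirrors the rdt_send / timeout / ACK-arrival transitions; make_pkt-style packets, udt_send and the timer callbacks are assumed helpers, not part of the slides, and checksums are omitted.

class GBNSender:
    def __init__(self, N, udt_send, start_timer, stop_timer):
        self.N = N                      # window size
        self.base = 1
        self.nextseqnum = 1
        self.sndpkt = {}                # seq# -> packet kept for retransmission
        self.udt_send = udt_send
        self.start_timer = start_timer
        self.stop_timer = stop_timer

    def rdt_send(self, data):
        if self.nextseqnum < self.base + self.N:
            pkt = ('DATA', self.nextseqnum, data)
            self.sndpkt[self.nextseqnum] = pkt
            self.udt_send(pkt)
            if self.base == self.nextseqnum:
                self.start_timer()      # timer covers the oldest unACKed pkt
            self.nextseqnum += 1
            return True
        return False                    # window full: refuse/buffer the data

    def timeout(self):
        self.start_timer()
        for seq in range(self.base, self.nextseqnum):   # go back N: resend all unACKed
            self.udt_send(self.sndpkt[seq])

    def ack_received(self, acknum):
        self.base = acknum + 1          # cumulative ACK
        if self.base == self.nextseqnum:
            self.stop_timer()           # nothing left in the pipe
        else:
            self.start_timer()          # restart for the remaining pkts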

3-44

GBN in action

What determines the size of the window?
1. RTT
2. Buffer at the receiver (flow control)
3. Network congestion

Q: GBN can have poor performance -- how? The sender sends pkts 1,2,3,4,5,6,7,8,9; pkt 1 gets lost; the receiver gets pkts 2,3,4,5,... but will discard them.

Window size = N = 4

3-45

Selective Repeat (an improvement on the GBN protocol)

receiver individually acknowledges all correctly received pkts; buffers pkts as needed for eventual in-order delivery to the upper layer. E.g., the sender sends pkts 1,2,3,4,...,10; the receiver got 2,4,6,8,10; the sender resends 1,3,5,7,9.

sender only resends pkts for which an ACK was not received; the sender keeps a timer for EACH unACKed pkt

sender window: N consecutive seq #'s; again limits the seq #'s of sent, unACKed pkts

3-46

Selective repeat sender receiver windows

Q: why do we have this region in the window? The ACK was lost, or the ACK is still on its way.

3-47

Selective repeat

sender:
  data from above: if the next available seq # is in the window, send the pkt
  timeout(n): resend pkt n, restart its timer
  ACK(n) in [sendbase, sendbase+N]: mark pkt n as received; if n is the smallest unACKed pkt, advance the window base to the next unACKed seq # (slide the window)

receiver:
  pkt n in [rcvbase, rcvbase+N-1]: send ACK(n); if out-of-order: buffer; if in-order: deliver (also deliver any buffered in-order pkts), advance the window to the next not-yet-received pkt
  pkt n in [rcvbase-N, rcvbase-1]: send ACK(n)
  otherwise: ignore

Q: why do we need to ACK pkts in [rcvbase-N, rcvbase-1]? The earlier ACK may have been lost; the sender may time out and resend the pkt, so we need to ACK it again.
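A minimal sketch of the selective-repeat receiver's buffering rule in Python (packets are assumed to carry (seq, data); send_ack and deliver are hypothetical callbacks standing in for the ACK path and the upper layer):

class SRReceiver:
    def __init__(self, N, send_ack, deliver):
        self.N = N
        self.rcvbase = 1
        self.buffer = {}                 # out-of-order pkts awaiting delivery
        self.send_ack = send_ack
        self.deliver = deliver

    def rdt_rcv(self, seq, data):
        if self.rcvbase <= seq < self.rcvbase + self.N:
            self.send_ack(seq)                     # ACK individually
            self.buffer[seq] = data
            while self.rcvbase in self.buffer:     # deliver the in-order run, slide window
                self.deliver(self.buffer.pop(self.rcvbase))
                self.rcvbase += 1
        elif self.rcvbase - self.N <= seq < self.rcvbase:
            self.send_ack(seq)                     # re-ACK: our earlier ACK may be lost
        # otherwise: ignore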

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example: seq #'s 0, 1, 2, 3; window size N = 3

the receiver sees no difference in the two scenarios, and incorrectly passes duplicate data as new in (a)

Q: what is the relationship between seq # space size and window size? N <= 2^k / 2, i.e., the window can be at most half the sequence-number space.

In real life we use k bits to implement seq #'s -- a practical issue.

3-50

Why bother studying reliable data transfer?

We know it is provided by TCP, so why bother to study it?

Sometimes we may need to implement "some form" of reliable transfer without the heavyweight TCP.

A good example is multimedia streaming. Even though the application is loss tolerant, if too many packets get lost the visual quality suffers, so we may want to implement some form of reliable transfer.

At the very least, appreciate the "good services" provided by some Internet gurus.

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview: RFCs 793, 1122, 1323, 2018, 2581

full duplex data: bi-directional data flow in the same connection; MSS: maximum segment size

connection-oriented: handshaking (exchange of control msgs) inits sender and receiver state (e.g., buffer size) before data exchange

flow controlled: the sender will not overwhelm the receiver

point-to-point: one sender, one receiver (not multicast)

reliable, in-order byte stream: no "message boundaries" (in the app layer we need delimiters)

pipelined: TCP congestion and flow control set the window size

send & receive buffers at each socket: the application writes data into the send buffer and reads data from the receive buffer; TCP builds segments from the buffered data [figure: socket door, TCP send buffer, TCP receive buffer, segment]

(The 800-lb gorilla in the transport stack. PAY ATTENTION!)

3-53

TCP segment structure

32 bits wide; fields: source port, dest port; sequence number; acknowledgement number; head len, unused bits, flags U|A|P|R|S|F; receive window; checksum; urg data pointer; options (variable length); application data (variable length)

URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection estab (setup, teardown commands)

receive window: # bytes rcvr is willing to accept; counting is by "bytes" of data (not segments); Internet checksum (as in UDP)

Due to the options field we have a variable-length header.

3-54

TCP seq. #'s and ACKs

Seq. #: byte-stream "number" of the first byte in the segment's data

ACK: seq # of the next byte expected from the other side; cumulative ACK

Q: how does the receiver handle out-of-order segments? A: the TCP spec doesn't say -- up to the implementor

[figure: simple telnet scenario -- the user types 'C'; Host A sends Seq=42, ACK=79, data='C'; Host B ACKs receipt of 'C' and echoes it back (Seq=79, ACK=43, data='C'); Host A ACKs receipt of the echoed 'C' (Seq=43, ACK=80)]

Initial seq #'s are negotiated during the 3-way handshake.

3-55

TCP Round Trip Time and Timeout

Q: how to set the TCP timeout value? Longer than the RTT -- but the RTT varies. Too short: premature timeout, unnecessary retransmissions. Too long: slow reaction to segment loss, poor performance.

Q: how to estimate the RTT? SampleRTT: measured time from segment transmission until ACK receipt; ignore retransmissions. SampleRTT will vary; we want an estimated RTT that is "smoother": average several recent measurements, not just the current SampleRTT.

[figure: timelines of transmission, retransmission and ACK, illustrating an estimated RTT that is too long vs. too short]

3-56

TCP Round Trip Time and Timeout

EstimatedRTT = (1 - a) * EstimatedRTT + a * SampleRTT

Exponential weighted moving average: the influence of a past sample decreases exponentially fast; typical value a = 0.125

With ERTT(0) = 0:
ERTT(1) = (1-a)*ERTT(0) + a*SRTT(1) = a*SRTT(1)
ERTT(2) = (1-a)*a*SRTT(1) + a*SRTT(2)
ERTT(3) = (1-a)^2*a*SRTT(1) + (1-a)*a*SRTT(2) + a*SRTT(3)

3-57

Example RTT estimation: RTT from gaia.cs.umass.edu to fantasia.eurecom.fr

[figure: SampleRTT and EstimatedRTT in milliseconds (roughly 100-350 ms) plotted over time in seconds; EstimatedRTT tracks a smoothed version of the fluctuating SampleRTT]

3-58

TCP Round Trip Time and Timeout

Setting the timeout (Jacobson/Karels): EstimatedRTT plus a "safety margin"; a large variation in EstimatedRTT -> a larger safety margin.

First estimate how much SampleRTT deviates from EstimatedRTT:

DevRTT = (1 - b) * DevRTT + b * |SampleRTT - EstimatedRTT|   (typically b = 0.25)

Then set the timeout interval:

TimeoutInterval = EstimatedRTT + 4 * DevRTT
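The two estimators combine into a few lines of code. A Python sketch (alpha = 0.125 and beta = 0.25 as on the slide; the sample values below are hypothetical measurements in milliseconds):

def update_timeout(estimated_rtt, dev_rtt, sample_rtt, alpha=0.125, beta=0.25):
    # update the deviation first (using the previous EstimatedRTT), then the mean
    dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample_rtt - estimated_rtt)
    estimated_rtt = (1 - alpha) * estimated_rtt + alpha * sample_rtt
    timeout_interval = estimated_rtt + 4 * dev_rtt
    return estimated_rtt, dev_rtt, timeout_interval

ertt, drtt = 0.0, 0.0
for sample in [105, 120, 98, 250, 110]:
    ertt, drtt, rto = update_timeout(ertt, drtt, sample)
    print(round(ertt, 1), round(drtt, 1), round(rto, 1))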

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender (simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event)

  event: data received from application above
    create TCP segment with sequence number NextSeqNum
    if (timer currently not running) start timer
    pass segment to IP
    NextSeqNum = NextSeqNum + length(data)

  event: timer timeout
    retransmit not-yet-acknowledged segment with smallest sequence number
    start timer

  event: ACK received, with ACK field value of y
    if (y > SendBase) {
      SendBase = y
      if (there are currently not-yet-acknowledged segments) start timer
    }
} /* end of loop forever */

Comment: SendBase-1 is the last cumulatively ACKed byte. Example: SendBase-1 = 71, y = 73, so the rcvr wants 73+; y > SendBase, so new data is ACKed.

3-63

TCP retransmission scenarios

[figure: lost ACK scenario -- Host A sends Seq=92, 8 bytes data; Host B's ACK=100 is lost (X); A's timer expires and it retransmits Seq=92, 8 bytes data; B ACKs again with ACK=100; SendBase moves to 100]

[figure: premature timeout scenario -- Host A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); the timer for Seq=92 expires before ACK=100 arrives, so A retransmits Seq=92; B replies with cumulative ACK=120; SendBase moves from 100 to 120]

3-64

TCP retransmission scenarios (more)

[figure: cumulative ACK scenario -- Host A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); ACK=100 is lost (X), but ACK=120 arrives before the timeout, so SendBase moves to 120 and nothing is retransmitted]

Room for improvement?

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver -> TCP Receiver action

Arrival of in-order segment with expected seq #; all data up to the expected seq # already ACKed -> Delayed ACK: wait up to 500 ms for the next segment; if no next segment, send ACK

Arrival of in-order segment with expected seq #; one other segment has an ACK pending -> Immediately send a single cumulative ACK, ACKing both in-order segments

Arrival of out-of-order segment with higher-than-expected seq # (gap detected) -> Immediately send a duplicate ACK, indicating the seq # of the next expected byte

Arrival of a segment that partially or completely fills the gap -> Immediately send an ACK, provided that the segment starts at the lower end of the gap

ACK the "largest in-order byte" seq #. Delayed ACKs REDUCE FEEDBACK TRAFFIC BY HALF.

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments) start timer
  }
  else {                                  /* a duplicate ACK for already ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y   /* fast retransmit */
  }

Q: why resend the pkt with seq # y? A: that is what the receiver expects.
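The same ACK-arrival branch, written out as a small Python sketch (retransmit and start_timer are assumed callbacks; the state dict keeps the snippet self-contained and simplifies the per-y duplicate-ACK count to a single counter):

def on_ack(state, y, retransmit, start_timer):
    """Cumulative-ACK handling with fast retransmit on the 3rd duplicate ACK."""
    if y > state['send_base']:
        state['send_base'] = y                 # new data ACKed
        state['dup_acks'] = 0
        if state['send_base'] < state['next_seq_num']:
            start_timer()                      # still unACKed segments outstanding
    else:
        state['dup_acks'] += 1                 # duplicate ACK for already ACKed data
        if state['dup_acks'] == 3:
            retransmit(y)                      # fast retransmit: resend segment y

state = {'send_base': 92, 'next_seq_num': 120, 'dup_acks': 0}
on_ack(state, 100, retransmit=print, start_timer=lambda: None)
print(state)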

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

the receive side of a TCP connection has a receive buffer

speed-matching service: matching the send rate to the receiving app's drain rate; the app process may be slow at reading from the buffer

flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast

3-70

TCP Flow control how it works

(Suppose the TCP receiver discards out-of-order segments.)

spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]

The rcvr advertises the spare room by including the value of RcvWindow in the segments it sends.

The sender limits unACKed data to RcvWindow -- this guarantees the receive buffer doesn't overflow.

This goes to show that the design of the header is important.
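The spare-room computation is just bookkeeping; a one-function Python sketch (variable names follow the slide; the example numbers are hypothetical):

def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    # spare room the receiver can still absorb; advertised back to the sender
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

# example: 64 KB buffer, 20 KB received but only 12 KB read by the app so far
print(rcv_window(64 * 1024, 20 * 1024, 12 * 1024))   # -> 57344 bytes advertised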

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender and receiver establish a "connection" before exchanging data segments

initialize TCP variables: seq #'s, buffers, flow control info (e.g., RcvWindow)

client: connection initiator -- Socket clientSocket = new Socket("hostname", port number);
server: contacted by client -- Socket connectionSocket = welcomeSocket.accept();

Three-way handshake:

Step 1: client host sends a TCP SYN segment to the server; specifies the initial seq #; no data
Step 2: server host receives SYN, replies with a SYN-ACK segment; server allocates buffers; specifies the server's initial seq #
Step 3: client receives SYN-ACK, replies with an ACK segment, which may contain data

IMPORTANT: SYN FLOODING (an attack that abuses the state allocated at step 2)

3-73

TCP three-way handshake

[figure: client sends SYN=1, seq=client_isn (connection request); server replies SYN=1, seq=server_isn, Ack=client_isn+1 (connection granted); client replies SYN=0, seq=client_isn+1, Ack=server_isn+1 (ACK)]

3-74

TCP Connection Management (cont)

Closing a connection: the client closes its socket: clientSocket.close();

Step 1: client end system sends a TCP FIN control segment to the server
Step 2: server receives FIN, replies with ACK; closes the connection, sends its own FIN

[figure: client sends FIN; server ACKs, then sends FIN; client ACKs and enters a "timed wait" before the connection is closed]

Q: why don't we combine the ACK and FIN? The sender may still have some data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3: client receives FIN, replies with ACK; enters "timed wait" -- it will respond with an ACK to any received FINs
Step 4: server receives ACK; connection closed

Note: with a small modification, this can handle simultaneous FINs.

[figure: FIN/ACK exchange in both directions; both sides pass through "closing" to "closed", with the client holding a timed wait before closing]

3-76

TCP Connection Management (cont)

[figures: TCP client lifecycle and TCP server lifecycle state diagrams]

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion, informally: "too many sources sending too much data too fast for the network to handle"

different from flow control! manifestations: lost packets (buffer overflow at routers); long delays (queueing in router buffers)

another top-10 problem!

TCP provides one of the MANY WAYS to perform congestion control.

3-79

Causescosts of congestion scenario 1

two senders, two receivers (homogeneous); one router with infinite buffers, capacity C; no retransmission

large delays when congested; maximum achievable throughput

[figure: Host A and Host B send lambda_in (original data) from their applications into a router with unlimited shared output-link buffers; the receiving applications get lambda_out]

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers; sender retransmission of lost packets

[figure: Host A offers lambda_in (original data) plus lambda'_in (original data plus retransmitted data) to a router with finite shared output-link buffers; Host B receives lambda_out]

The retransmissions come from the transport layer, because packets get dropped at the router.

3-81

Causes/costs of congestion: scenario 2 (continued)

(a) always: lambda_in = lambda_out (goodput)
(b) "perfect" retransmission, only when loss: lambda'_in > lambda_out
(c) retransmission of delayed (not lost) packets makes lambda'_in larger (than the perfect case) for the same lambda_out

"costs" of congestion: more work (retransmissions) for a given "goodput"; unneeded retransmissions: the link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3 -- four senders, multihop paths, timeout/retransmit

Q: what happens as lambda_in and lambda'_in increase?

[figure: Host A sends lambda_in (original data) plus lambda'_in (with retransmissions) through finite shared output-link buffers toward Host B, competing with traffic from the other hosts]

3-83

Causes/costs of congestion: scenario 3 (continued)

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted.

[figure: as the offered load grows, lambda_out eventually collapses toward zero]

The system collapses (e.g., like overloaded students).

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell; the sender's send rate is thus the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

the sender limits transmission: LastByteSent - LastByteAcked <= CongWin

rate ~ CongWin / RTT bytes/sec, roughly

CongWin is a dynamic function of perceived network congestion

How does the sender perceive congestion? A loss event = timeout or 3 duplicate ACKs; the TCP sender reduces its rate (CongWin) after a loss event.

three mechanisms: AIMD, slow start, conservative behavior after timeout events

Note: congestion control must still be efficient enough to make use of the available bandwidth.

3-89

TCP AIMD

[figure: congestion window of a long-lived TCP connection over time -- a sawtooth between roughly 8, 16 and 24 Kbytes]

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events (probing)

Role of ACKs: 1. an indication of loss; 2. ACKs are self-clocking -- if the delay is large, the congestion window grows more slowly

Human analogy: HK government or CUHK?

3-90

TCP Slow Start

When the connection begins, CongWin = 1 MSS. Example: MSS = 500 bytes & RTT = 200 msec -> initial rate = 20 kbps.

The available bandwidth may be >> MSS/RTT: it is desirable to quickly ramp up to a respectable rate.

When the connection begins, increase the rate exponentially fast until the first loss event.

3-91

TCP Slow Start (more)

When the connection begins, increase the rate exponentially until the first loss event: double CongWin every RTT; done by incrementing CongWin for every ACK received.

Summary: the initial rate is slow but ramps up exponentially fast.

[figure: Host A sends one segment, then two, then four, each round spaced one RTT apart]

3-92

Refinement: after 3 dup ACKs, CongWin is cut in half and the window then grows linearly. This is known as the "fast recovery" phase, implemented in TCP Reno.

But after a timeout event: CongWin is instead set to 1 MSS; the window then grows exponentially up to a threshold, then grows linearly.

Philosophy: 3 dup ACKs indicate the network is still capable of delivering some segments; a timeout before 3 dup ACKs is "more alarming".

3-93

Refinement (more). Q: When should the exponential increase switch to linear?

A: When CongWin gets to 1/2 of its value before the timeout.

Implementation: a variable Threshold; at a loss event, Threshold is set to 1/2 of CongWin just before the loss event.

[figure: congestion window size (segments) vs. transmission round, showing the threshold and the differing behavior of TCP Tahoe and TCP Reno after a loss]

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.

When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate and prepare for a loss.

3-95

TCP sender congestion control

Event / State / TCP Sender Action / Commentary

ACK receipt for previously unacked data / Slow Start (SS) / CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance" / Results in a doubling of CongWin every RTT

ACK receipt for previously unacked data / Congestion Avoidance (CA) / CongWin = CongWin + MSS * (MSS/CongWin) / Additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK / SS or CA / Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance" / Fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Timeout / SS or CA / Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start" / Enter slow start

Duplicate ACK / SS or CA / Increment duplicate ACK count for the segment being ACKed / CongWin and Threshold not changed
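The table above is essentially a small state machine. A Python sketch that walks CongWin (in units of MSS) through slow start, congestion avoidance and the two loss reactions; it is purely illustrative, not a full TCP Reno implementation, and the initial threshold of 64 MSS is an assumption.

class CongestionControl:
    def __init__(self):
        self.cong_win = 1.0        # in MSS
        self.threshold = 64.0      # initial ssthresh, in MSS (assumed)
        self.state = 'SS'          # 'SS' = slow start, 'CA' = congestion avoidance

    def on_new_ack(self):
        if self.state == 'SS':
            self.cong_win += 1.0                      # doubles CongWin every RTT
            if self.cong_win >= self.threshold:
                self.state = 'CA'
        else:
            self.cong_win += 1.0 / self.cong_win      # ~1 MSS per RTT

    def on_triple_dup_ack(self):
        self.threshold = self.cong_win / 2
        self.cong_win = max(self.threshold, 1.0)      # halve, never below 1 MSS
        self.state = 'CA'

    def on_timeout(self):
        self.threshold = self.cong_win / 2
        self.cong_win = 1.0
        self.state = 'SS'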

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when a loss occurs.

When the window is W, the throughput is W/RTT. Just after a loss, the window drops to W/2 and the throughput to (W/2)/RTT.

Average throughput: 0.75 * W/RTT.

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT; we want 10 Gbps throughput.

This requires a window size of W = 83,333 in-flight segments.

Throughput in terms of loss rate: Throughput = (1.22 * MSS) / (RTT * sqrt(L))

To reach 10 Gbps this needs a loss rate of L = 2 * 10^-10 -- wow! New versions of TCP for high-speed networks are needed.
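Plugging the slide's numbers into that formula is a one-liner. A Python sketch (MSS in bits, RTT in seconds, L the loss probability):

from math import sqrt

def tcp_throughput(mss_bits, rtt, loss_rate):
    return 1.22 * mss_bits / (rtt * sqrt(loss_rate))

mss = 1500 * 8        # 1500-byte segments
rtt = 0.100           # 100 ms
target = 10e9         # 10 Gbps
loss = (1.22 * mss / (rtt * target)) ** 2     # solve the formula for L
print(loss)                                   # ~ 2.1e-10, i.e. about 2 * 10^-10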

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g., similar propagation delay, etc.):

additive increase gives a slope of 1 as throughput increases; multiplicative decrease decreases throughput proportionally

[figure: Connection 1 throughput vs. Connection 2 throughput, each bounded by R; repeated rounds of congestion avoidance (additive increase) followed by loss (window cut by a factor of 2) move the operating point toward the equal-bandwidth-share line]

3-100

Fairness (more)

Fairness and UDP: multimedia apps often do not use TCP -- they do not want their rate throttled by congestion control. Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss. (Research area: TCP-friendly congestion control.)

Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts; web browsers do this. Example: a link of rate R supporting 9 connections; a new app that asks for 1 TCP connection gets rate R/10; a new app that asks for 11 TCP connections gets around R/2.

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by: TCP connection establishment, data transmission delay, slow start, and the sender's congestion window.

Notation & assumptions: one link between client and server of rate R; S = MSS (bits); O = object size (bits); no retransmissions (no loss, no corruption); protocol overhead is negligible.

Window size: first assume a fixed congestion window of W segments; then model a dynamic window (slow start).

3-102

Fixed congestion window (1), assuming no congestion-window constraint

First case: WS/R > RTT + S/R -- the ACK for the first segment in the window returns before a window's worth of data has been sent.

delay = 2*RTT + O/R

(The object request is piggybacked; this is the "lower bound" on latency.)

Let W denote a 'fixed' congestion window size (a positive integer).

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R -- the sender must wait for an ACK after sending each window's worth of data.

delay = 2*RTT + O/R + (K-1)*[S/R + RTT - WS/R]

where K is the number of windows of data that cover the object: if O is the "size" of the object, K = O/(WS), rounded up to an integer.
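Both cases collapse into one expression if we clamp the idle time at zero. A Python sketch under the slide's assumptions (one link of rate R; O and S in bits; the example numbers are illustrative):

from math import ceil

def fixed_window_latency(O, S, R, RTT, W):
    K = ceil(O / (W * S))                         # windows covering the object
    idle_per_window = max(S / R + RTT - W * S / R, 0)
    return 2 * RTT + O / R + (K - 1) * idle_per_window

# example: 100 KB object, 536-byte segments, 1 Mbps link, 100 ms RTT, W = 4
print(fixed_window_latency(100e3 * 8, 536 * 8, 1e6, 0.1, 4))   # about 5 seconds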

3-104

TCP Delay Modeling: Slow Start (1)

Now suppose the window grows according to slow start. We will show that the delay for one object is:

Latency = 2*RTT + O/R + P*[RTT + S/R] - (2^P - 1)*S/R

where P is the number of times TCP idles at the server:

P = min{Q, K-1}

- Q is the number of times the server would idle if the object were of infinite size
- K is the number of windows that cover the object

3-105

TCP Delay Modeling: Slow Start (2)

[figure: timeline between client and server -- initiate TCP connection, request object, then windows of size S/R, 2S/R, 4S/R, 8S/R, ..., until the transmission completes and the object is delivered]

Example: O/S = 15 segments, K = 4 windows, Q = 2, P = min{K-1, Q} = 2; the server idles P = 2 times.

Delay components: 2*RTT for connection establishment and request; O/R to transmit the object; the time the server idles due to slow start. The server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

time to transmit the k-th window = (2^(k-1)) * S/R

idle time after the k-th window = max{ S/R + RTT - (2^(k-1)) * S/R , 0 }
(S/R + RTT is the time from when the server starts to send a segment until the server receives an acknowledgement)

delay = 2*RTT + O/R + sum over p = 1..P of idleTime_p
      = 2*RTT + O/R + sum over k = 1..P of [ S/R + RTT - (2^(k-1)) * S/R ]
      = 2*RTT + O/R + P*[RTT + S/R] - (2^P - 1)*S/R

where k = 1, 2, ..., K.

[figure: the same client/server timeline as before, with windows of size S/R, 2S/R, 4S/R, 8S/R]

3-107

TCP Delay Modeling (4)

Recall: K = the number of windows that cover the object. How do we calculate K?

K = min{ k : 2^0*S + 2^1*S + ... + 2^(k-1)*S >= O }
  = min{ k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S }
  = min{ k : 2^k - 1 >= O/S }
  = min{ k : k >= log2(O/S + 1) }
  = ceil( log2(O/S + 1) )

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
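Putting K, Q and P together, the whole slow-start latency model fits in a short function. A Python sketch using the formulas above (O and S in bits, R in bits/sec, RTT in seconds; the expression for Q follows from the same idle-time condition and is our derivation, not spelled out on the slides):

from math import ceil, log2

def slow_start_latency(O, S, R, RTT):
    K = ceil(log2(O / S + 1))                 # windows that cover the object
    Q = ceil(log2(1 + RTT * R / S))           # idles for an infinite-size object
    P = min(Q, K - 1)                         # times the server actually idles
    return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R

# example: S = 536 bytes, RTT = 100 ms, O = 100 Kbytes, R = 100 kbps
print(slow_start_latency(100e3 * 8, 536 * 8, 100e3, 0.1))   # ~8.4 sec, close to Experiment A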

3-108

Experiment A: S = 536 bytes, RTT = 100 msec, O = 100 Kbytes (relatively large)

R        | O/R       | P | Min latency = O/R + 2*RTT | Latency with slow start
28 kbps  | 28.6 sec  | 1 | 28.8 sec                  | 28.9 sec
100 kbps | 8 sec     | 2 | 8.2 sec                   | 8.4 sec
1 Mbps   | 800 msec  | 5 | 1.0 sec                   | 1.5 sec
10 Mbps  | 80 msec   | 7 | 0.28 sec                  | 0.98 sec

1. Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B: S = 536 bytes, RTT = 100 msec, O = 5 Kbytes (relatively small)

R        | O/R       | P | Min latency = O/R + 2*RTT | Latency with slow start
28 kbps  | 1.43 sec  | 1 | 1.63 sec                  | 1.73 sec
100 kbps | 0.4 sec   | 2 | 0.6 sec                   | 0.76 sec
1 Mbps   | 40 msec   | 3 | 0.24 sec                  | 0.52 sec
10 Mbps  | 4 msec    | 3 | 0.20 sec                  | 0.50 sec

1. Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C: S = 536 bytes, RTT = 1 sec, O = 5 Kbytes (relatively small)

R        | O/R       | P | Min latency = O/R + 2*RTT | Latency with slow start
28 kbps  | 1.43 sec  | 3 | 3.4 sec                   | 5.8 sec
100 kbps | 0.4 sec   | 3 | 2.4 sec                   | 5.2 sec
1 Mbps   | 40 msec   | 3 | 2.0 sec                   | 5.0 sec
10 Mbps  | 4 msec    | 3 | 2.0 sec                   | 5.0 sec

1. Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling. Assume a Web page consists of 1 base HTML page (of size O bits) and M images (each of size O bits).

Non-persistent HTTP: M+1 TCP connections in series; Response time = (M+1)*O/R + (M+1)*2*RTT + sum of idle times

Persistent HTTP: 2 RTT to request and receive the base HTML file, 1 RTT to request and receive the M images; Response time = (M+1)*O/R + 3*RTT + sum of idle times

Non-persistent HTTP with X parallel connections (suppose M/X is an integer): 1 TCP connection for the base file, M/X sets of parallel connections for the images; Response time = (M+1)*O/R + (M/X + 1)*2*RTT + sum of idle times
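The three formulas side by side, as a Python sketch (idle_times is left as an argument since it depends on the slow-start model above; the example values are the ones used in the following charts):

def nonpersistent(O, R, RTT, M, idle_times=0.0):
    return (M + 1) * O / R + (M + 1) * 2 * RTT + idle_times

def persistent(O, R, RTT, M, idle_times=0.0):
    return (M + 1) * O / R + 3 * RTT + idle_times

def nonpersistent_parallel(O, R, RTT, M, X, idle_times=0.0):
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT + idle_times

# example: O = 5 Kbytes, R = 100 kbps, RTT = 1 sec, M = 10, X = 5 (ignoring idle times)
O, R, RTT, M, X = 5e3 * 8, 100e3, 1.0, 10, 5
print(nonpersistent(O, R, RTT, M), persistent(O, R, RTT, M), nonpersistent_parallel(O, R, RTT, M, X))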

3-112

[figure: HTTP response time (in seconds) for RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5, at access rates of 28 kbps, 100 kbps, 1 Mbps and 10 Mbps, comparing non-persistent, persistent, and parallel non-persistent connections]

For a low-bandwidth connection, response time is dominated by transmission time; persistent connections give only a minor improvement over parallel connections.

3-113

[figure: HTTP response time (in seconds) for RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5, at the same access rates, comparing non-persistent, persistent, and parallel non-persistent connections]

For larger RTT, response time is dominated by TCP establishment and slow-start delays; persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport-layer services: multiplexing, demultiplexing; reliable data transfer; flow control; congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application, transport layers), into the network "core".

Page 10: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-10

Connectionless demux (cont)

DatagramSocket serverSocket = new DatagramSocket(6428)

ClientIPB

P3

client IP A

P1P1P3

serverIP C

SP 6428

DP 9157

SP 9157

DP 6428

SP 6428

DP 5775

SP 5775

DP 6428

SP provides ldquoreturn addressrdquo

Source IP and port can be spoofed

Based on destination IP and

port

3-11

Connection-oriented demux

TCP socket identified by 4-tuple source IP address source port number dest IP address dest port number

recv host uses all four values to direct segment to appropriate socket

Server host may support many simultaneous TCP sockets each socket identified

by its own 4-tuple

Web servers have different sockets for each connecting client non-persistent HTTP

will have different socket for each request

3-12

Connection-oriented demux (cont)

ClientIPB

P3

client IP A

P1P1P3

serverIP C

SP 80

DP 9157

SP 9157

DP 80

SP 80

DP 5775

SP 5775

DP 80

P4

(S-IPSP D-IP DP)

3-13

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-14

UDP User Datagram Protocol [RFC 768]

ldquono frillsrdquo ldquobare bonesrdquo Internet transport protocol

ldquobest effortrdquo service UDP segments may be lost delivered out of order

to app connectionless

no handshaking between UDP sender receiver

each UDP segment handled independently of others

Why is there a UDP no connection

establishment (which can add delay)

simple no connection state at sender receiver

small segment header no congestion control

UDP can blast away as fast as desired

3-15

UDP more

often used for streaming multimedia apps loss tolerant rate sensitive

other UDP uses DNS SNMP

reliable transfer over UDP add reliability at application layer application-specific

error recovery (eg FTP based on UDP but with recovery)

source port dest port

32 bits

Applicationdata

(message)

UDP segment format

length checksumLength in

bytes of UDPsegmentincluding

headerWhen the network is stressed you PRAY

3-16

UDP checksum

Sender treat segment

contents as sequence of 16-bit integers

checksum addition (1rsquos complement sum) of segment contents

sender puts checksum value into UDP checksum field

Receiver compute checksum of

received segment check if computed

checksum equals checksum field value NO - error detected YES - no error detected

But maybe errors nonetheless More later hellip

Goal detect ldquoerrorsrdquo (eg flipped bits) in transmitted segment

eg 1+2+3 = 6 So is 0+3+3=6

3-17

Internet Checksum Example Note

When adding numbers a carryout from the most significant bit needs to be added to the result

Example add two 16-bit integers

1 1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 01 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 1 0 01 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1

wraparound

sumchecksum

3-18

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-19

Principles of Reliable data transfer important in app transport link layers top-10 list of important networking topics

characteristics of unreliable channel will determine complexity of reliable data transfer protocol (rdt)

abstractionThis picture sets the scenario

3-20

Reliable data transfer getting started

sendside

receiveside

rdt_send() called from above (eg by app)

Passed data to deliver to receiver upper

layer

udt_send() called by rdtto transfer packet over

unreliable channel to receiver

rdt_rcv() called when packet arrives on rcv-side of

channel

deliver_data() called by rdt to deliver data

to upper

Let us now look at the gut of these modules Any question

3-21

Reliable data transfer getting startedWersquoll incrementally develop sender receiver

sides of reliable data transfer protocol (rdt)

consider only unidirectional data transfer but control info will flow on both directions

use finite state machines (FSM) to specify sender receiver

state1

state2

event causing state transitionactions taken on state transition

state when in this ldquostaterdquo next

state uniquely determined by

next eventeventactions

(DONrsquoT FALL ASLEEP)

Event timer receives message hellipetcAction executes a program send message hellipetc

3-22

Rdt10 reliable transfer over a reliable channel

underlying channel perfectly reliable no bit errors no loss of packets

separate FSMs for sender receiver sender sends data into underlying channel receiver reads data from underlying channel

Wait for call from

above packet = make_pkt(data)udt_send(packet)

rdt_send(data)

extract (packetdata)deliver_data(data)

Wait for call from

below

rdt_rcv(packet)

sender receiver

In reality this is an unrealistic assumption but

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in packet recall UDP checksum to detect bit errors

the question how to recover from errors acknowledgements (ACKs) receiver explicitly tells

sender that pkt received OK negative acknowledgements (NAKs) receiver

explicitly tells sender that pkt had errors sender retransmits pkt on receipt of NAK human scenarios using ACKs NAKs

new mechanisms in rdt20 (beyond rdt10) error detection receiver feedback control msgs (ACKNAK) rcvr-

gtsender

Ack I love u I love u 2Nak I love u I donrsquot love u

3-24

rdt20 FSM specification

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

belowsender

receiverrdt_send(data)

Buffer is needed to store data from application layer or to block call

3-25

rdt20 operation with no errors

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

3-26

rdt20 error scenario

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

GOT IT

3-27

rdt20 has a fatal flaw

What happens if ACKNAK corrupted

sender doesnrsquot know what happened at receiver

canrsquot just retransmit possible duplicate

What to do sender ACKsNAKs

receiverrsquos ACKNAK What if sender ACKNAK lost

retransmit but this might cause retransmission of correctly received pkt

Handling duplicates sender adds sequence

number to each pkt sender retransmits

current pkt if ACKNAK garbled

receiver discards (doesnrsquot deliver up) duplicate pkt

Sender sends one packet then waits for receiver response

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

point-to-point: one sender, one receiver (not multicast)

reliable, in-order byte stream: no "message boundaries" (in the app layer we need delimiters)

pipelined: TCP congestion and flow control set window size

full duplex data: bi-directional data flow in same connection; MSS: maximum segment size

connection-oriented: handshaking (exchange of control msgs) inits sender, receiver state (e.g. buffer size) before data exchange

flow controlled: sender will not overwhelm receiver

send & receive buffers
[Figure: the application writes data into the socket and the TCP send buffer; segments travel to the peer's TCP receive buffer, from which the application reads data]

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

[TCP segment format, 32 bits wide: source port #, dest port #; sequence number; acknowledgement number; head len | not used | flag bits U A P R S F | receive window; checksum | urg data pointer; options (variable length); application data (variable length)]

URG: urgent data (generally not used)
ACK: ACK # valid
PSH: push data now (generally not used)
RST, SYN, FIN: connection estab (setup, teardown commands)
Receive window: # bytes rcvr willing to accept
counting by "bytes" of data (not segments!)
Internet checksum (as in UDP)

Due to the options field we have a variable-length header

3-54

TCP seq #'s and ACKs

Seq #'s: byte stream "number" of the first byte in the segment's data

ACKs: seq # of the next byte expected from the other side; cumulative ACK

Q: how does the receiver handle out-of-order segments? A: the TCP spec doesn't say - it is up to the implementor

[Figure - simple telnet scenario: user types 'C'; Host A sends Seq=42, ACK=79, data = 'C'; Host B ACKs receipt of 'C' and echoes back 'C' with Seq=79, ACK=43, data = 'C'; Host A ACKs receipt of the echoed 'C' with Seq=43, ACK=80]

Initial seq #'s are negotiated during the 3-way handshake

3-55

TCP Round Trip Time and Timeout

Q: how to set the TCP timeout value?
longer than RTT, but RTT varies
too short: premature timeout, unnecessary retransmissions
too long: slow reaction to segment loss, poor performance

Q: how to estimate RTT?
SampleRTT: measured time from segment transmission until ACK receipt; ignore retransmissions
SampleRTT will vary; want the estimated RTT "smoother": average several recent measurements, not just the current SampleRTT

[Figure: if retransmissions were measured, associating the ACK with the original tx or with the retx makes the estimated RTT too long or too short]

3-56

TCP Round Trip Time and Timeout

EstimatedRTT = (1 - α)·EstimatedRTT + α·SampleRTT

Exponential weighted moving average: the influence of past samples decreases exponentially fast; typical value: α = 0.125

With ERTT(0) = 0:
ERTT(1) = (1-α)·ERTT(0) + α·SRTT(1) = α·SRTT(1)
ERTT(2) = (1-α)·α·SRTT(1) + α·SRTT(2)
ERTT(3) = (1-α)²·α·SRTT(1) + (1-α)·α·SRTT(2) + α·SRTT(3)

3-57

Example RTT estimation
[Figure: SampleRTT and EstimatedRTT in milliseconds (100-350 ms) over time (seconds), for the RTT from gaia.cs.umass.edu to fantasia.eurecom.fr]

3-58

TCP Round Trip Time and Timeout

Setting the timeout (by Jacobson/Karels): EstimatedRTT plus a "safety margin"
large variation in EstimatedRTT -> larger safety margin
first estimate how much SampleRTT deviates from EstimatedRTT:

DevRTT = (1 - β)·DevRTT + β·|SampleRTT - EstimatedRTT|
(typically β = 0.25)

Then set the timeout interval:

TimeoutInterval = EstimatedRTT + 4·DevRTT
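A small Python sketch of these two EWMA updates and the resulting timeout (the ordering of the two updates and the initial values are assumptions; the slide does not pin them down):

    # RTT estimation and timeout, alpha = 0.125, beta = 0.25 as on the slides.
    def update_rto(estimated_rtt, dev_rtt, sample_rtt, alpha=0.125, beta=0.25):
        estimated_rtt = (1 - alpha) * estimated_rtt + alpha * sample_rtt
        dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample_rtt - estimated_rtt)
        timeout_interval = estimated_rtt + 4 * dev_rtt
        return estimated_rtt, dev_rtt, timeout_interval

    # e.g. starting from (est, dev) = (0.100, 0.0) and feeding samples
    # 0.100, 0.120, 0.090 s gives a TimeoutInterval of roughly 0.12-0.13 s.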

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IP's unreliable service

Pipelined segments (for performance)

Cumulative ACKs

TCP uses a single retransmission timer

Retransmissions are triggered by: timeout events; duplicate ACKs (for performance reasons)

Initially, consider a simplified TCP sender: ignore duplicate ACKs; ignore flow control and congestion control

3-61

TCP sender events:

data rcvd from app: create segment with seq #; seq # is the byte-stream number of the first data byte in the segment; start timer if not already running (think of the timer as for the oldest unacked segment); expiration interval: TimeOutInterval

timeout: retransmit the segment that caused the timeout; restart timer

ACK rcvd: if it acknowledges previously unacked segments, update what is known to be ACKed; start timer if there are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
    switch(event)

    event: data received from application above
        create TCP segment with sequence number NextSeqNum
        if (timer currently not running)
            start timer
        pass segment to IP
        NextSeqNum = NextSeqNum + length(data)

    event: timer timeout
        retransmit not-yet-acknowledged segment with smallest sequence number
        start timer

    event: ACK received, with ACK field value of y
        if (y > SendBase) {
            SendBase = y
            if (there are currently not-yet-acknowledged segments)
                start timer
        }
} /* end of loop forever */

Comment:
• SendBase-1 = last cumulatively ACKed byte
Example:
• SendBase-1 = 71; y = 73, so the rcvr wants 73+; y > SendBase, so new data is ACKed

3-63

TCP retransmission scenarios

[Figure - lost ACK scenario: Host A sends Seq=92, 8 bytes data; Host B's ACK=100 is lost (X); A's timer expires, A resends Seq=92, 8 bytes data; B ACKs 100 again; SendBase moves from 92 to 100]

[Figure - premature timeout: Host A sends Seq=92, 8 bytes data and Seq=100, 20 bytes data (SendBase=100, then 120); B returns ACK=100 and ACK=120; A's timer for Seq=92 expires prematurely, so Seq=92, 8 bytes data is resent and B replies with cumulative ACK=120]

3-64

TCP retransmission scenarios (more)

[Figure - cumulative ACK scenario: Host A sends Seq=92, 8 bytes data and Seq=100, 20 bytes data; ACK=100 is lost (X), but cumulative ACK=120 arrives before the timeout, so SendBase jumps to 120 and nothing is retransmitted]

Room for improvement?

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver -> TCP Receiver action

Arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed -> Delayed ACK: wait up to 500 ms for the next segment; if no next segment, send ACK

Arrival of in-order segment with expected seq #; one other segment has an ACK pending -> Immediately send a single cumulative ACK, ACKing both in-order segments

Arrival of out-of-order segment with higher-than-expected seq #; gap detected -> Immediately send a duplicate ACK, indicating the seq # of the next expected byte

Arrival of a segment that partially or completely fills the gap -> Immediately send ACK, provided that the segment starts at the lower end of the gap

ACK the "largest in-order byte" seq #

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period is often relatively long: long delay before resending a lost packet

Detect lost segments via duplicate ACKs: the sender often sends many segments back-to-back; if a segment is lost, there will likely be many duplicate ACKs

If the sender receives 3 duplicate ACKs for the same data, it supposes that the segment after the ACKed data was lost: fast retransmit - resend the segment before the timer expires (before the timeout)

3-67

event: ACK received, with ACK field value of y
    if (y > SendBase) {
        SendBase = y
        if (there are currently not-yet-acknowledged segments)
            start timer
    }
    else {
        increment count of dup ACKs received for y
        if (count of dup ACKs received for y == 3)
            resend segment with sequence number y    /* fast retransmit */
    }

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmit

Q: why resend the pkt with seq # y?

A: That is what the receiver expects.
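A hedged Python sketch of the dup-ACK counting in the pseudocode above (the surrounding sender state and the retransmit/start_timer callbacks are assumed for illustration):

    # Fast-retransmit bookkeeping for the ACK event (illustrative).
    def on_ack(y, send_base, dup_acks, has_unacked, retransmit, start_timer):
        if y > send_base:                          # new data cumulatively ACKed
            send_base = y
            dup_acks.clear()                       # reset duplicate-ACK counts
            if has_unacked:
                start_timer()
        else:                                      # duplicate ACK for seq # y
            dup_acks[y] = dup_acks.get(y, 0) + 1
            if dup_acks[y] == 3:                   # triple duplicate ACK
                retransmit(y)                      # fast retransmit, before timeout
        return send_base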

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

the app process may be slow at reading from the buffer

flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast

speed-matching service: matching the send rate to the receiving app's drain rate

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]

Rcvr advertises the spare room by including the value of RcvWindow in segments

Sender limits unACKed data to RcvWindow; this guarantees the receive buffer doesn't overflow

This goes to show that the design of the header is important.
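In code form (a trivial sketch; the byte-counter names mirror the slide, everything else is assumed):

    # Receiver's advertised window and the sender-side constraint (illustrative).
    def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
        return rcv_buffer - (last_byte_rcvd - last_byte_read)       # spare room

    def sender_may_send(last_byte_sent, last_byte_acked, advertised_window):
        # unACKed data in flight must not exceed the advertised RcvWindow
        return (last_byte_sent - last_byte_acked) <= advertised_window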

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender and receiver establish a "connection" before exchanging data segments

initialize TCP variables: seq #s, buffers, flow control info (e.g. RcvWindow)

client: connection initiator - Socket clientSocket = new Socket("hostname", port number);
server: contacted by client - Socket connectionSocket = welcomeSocket.accept();

Three way handshake:

Step 1: client host sends TCP SYN segment to server; specifies initial seq #; no data

Step 2: server host receives SYN, replies with SYN-ACK segment; server allocates buffers; specifies server initial seq #

Step 3: client receives SYN-ACK, replies with ACK segment, which may contain data

IMPORTANT: SYN FLOODING

3-73

TCP three-way handshake

[Figure: client -> server: SYN=1, seq=client_isn (connection request); server -> client: SYN=1, seq=server_isn, ack=client_isn+1 (connection granted); client -> server: SYN=0, seq=client_isn+1, ack=server_isn+1 (ACK)]

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket: clientSocket.close();

Step 1: client end system sends TCP FIN control segment to server

Step 2: server receives FIN, replies with ACK; closes connection, sends FIN

[Figure: client sends FIN and closes; server ACKs, then closes and sends its own FIN; client ACKs, waits in timed wait, and the connection is closed]

Q: why don't we combine the ACK and FIN?

The sender may still have some data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3: client receives FIN, replies with ACK

Enters "timed wait" - will respond with ACK to received FINs

Step 4: server receives ACK; connection closed

Note: with a small modification, this can handle simultaneous FINs

[Figure: the same FIN/ACK exchange, annotated with the closing, timed wait and closed states on each side]

3-76

TCP Connection Management (cont)

[Figures: TCP client lifecycle and TCP server lifecycle state diagrams]

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion: informally: "too many sources sending too much data too fast for the network to handle"

different from flow control!

manifestations: lost packets (buffer overflow at routers); long delays (queueing in router buffers)

another top-10 problem!

TCP provides one of the MANY WAYS to perform congestion control

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous)

one router, infinite buffers, capacity C

no retransmission

large delays when congested

maximum achievable throughput

[Figure: Host A and Host B each send λin (original data, from the application) into a router with unlimited shared output link buffers; λout is delivered to the receiving application]

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers

sender retransmission of lost packets

[Figure: Host A offers λin (original data) plus retransmissions, λ'in (original data plus retransmitted data), to a router with finite shared output link buffers; λout is delivered to Host B. The retransmissions are due to the transport layer: pkts get dropped at the router.]

3-81

Causes/costs of congestion: scenario 2

always: λin = λout (goodput)

"perfect" retransmission, only when loss: λ'in > λout

retransmission of a delayed (not lost) packet makes λ'in larger (than the perfect case) for the same λout

"costs" of congestion:
more work (retransmissions) for a given "goodput"
unneeded retransmissions: the link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3

four senders; multihop paths; timeout/retransmit

Q: what happens as λin and λ'in increase?

[Figure: Host A offers λin (original data) and λ'in (original data plus retransmitted data) to a chain of routers with finite shared output link buffers; λout is the throughput delivered to Host B]

3-83

Causes/costs of congestion: scenario 3

Another "cost" of congestion: when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted

[Figure: λout collapses as the offered load keeps growing - the system collapses (e.g. students!)]

3-84

Approaches towards congestion control

End-end congestion control:
no explicit feedback from the network
congestion inferred from end-system observed loss, delay
approach taken by TCP

Network-assisted congestion control:
routers provide feedback to end systems
single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
explicit rate the sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR: available bit rate:
"elastic service"
if sender's path is "underloaded": sender should use available bandwidth
if sender's path is congested: sender throttled to minimum guaranteed rate

RM (resource management) cells:
sent by sender, interspersed with data cells
bits in RM cell set by switches ("network-assisted"):
NI bit: no increase in rate (mild congestion)
CI bit: congestion indication
RM cells returned to sender by receiver, with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell: a congested switch may lower the ER value in the cell; the sender's send rate is thus the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission: LastByteSent - LastByteAcked <= CongWin

Roughly: rate = CongWin / RTT  bytes/sec

CongWin is a dynamic function of perceived network congestion

How does the sender perceive congestion?

loss event = timeout or 3 duplicate ACKs

TCP sender reduces rate (CongWin) after a loss event

three mechanisms: AIMD, slow start, conservative after timeout events

Note: CC must be efficient to make use of the available BW

3-89

TCP AIMD

[Figure: congestion window sawtooth (8, 16, 24 Kbytes) over time for a long-lived TCP connection]

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

Role of ACKs:
1. an indication of loss
2. ACKs are self-clocking: if the delay is large, the rate of congestion window growth is reduced

Human analogy: HK government or CUHK

3-90

TCP Slow Start

When the connection begins, CongWin = 1 MSS
Example: MSS = 500 bytes & RTT = 200 msec -> initial rate = 20 kbps

available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate

When the connection begins, increase the rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)

When the connection begins, increase the rate exponentially until the first loss event: double CongWin every RTT; done by incrementing CongWin for every ACK received

Summary: the initial rate is slow but ramps up exponentially fast

[Figure: Host A sends one segment, then two, then four, doubling every RTT as Host B's ACKs return]

3-92

Refinement

After 3 dup ACKs: CongWin is cut in half; the window then grows linearly. This is known as the "fast recovery" phase, implemented in the newer TCP Reno.

But after a timeout event: CongWin is instead set to 1 MSS; the window then grows exponentially up to a threshold, then grows linearly.

Philosophy:
• 3 dup ACKs indicates the network is capable of delivering some segments
• a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?

A: When CongWin gets to 1/2 of its value before the timeout

Implementation:
Variable Threshold
At a loss event, Threshold is set to 1/2 of CongWin just before the loss event

[Figure: congestion window size (segments, 0-14) vs. transmission round (1-15), showing the threshold and the different behaviour of TCP Tahoe and TCP Reno after a loss]

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.

When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss
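The rules above can be written as one small state update per event; this is only an illustrative Reno-style sketch in MSS units, not an exact TCP implementation:

    # CongWin/Threshold updates per event (illustrative, window in MSS units).
    def on_new_ack(congwin, threshold):
        if congwin < threshold:
            return congwin + 1, threshold          # slow start: +1 MSS per ACK
        return congwin + 1.0 / congwin, threshold  # congestion avoidance: ~+1 MSS per RTT

    def on_triple_dup_ack(congwin, threshold):
        threshold = max(congwin / 2.0, 1.0)
        return threshold, threshold                # fast recovery: CongWin = Threshold

    def on_timeout(congwin, threshold):
        return 1.0, max(congwin / 2.0, 1.0)        # back to slow start with CongWin = 1 MSS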

3-95

TCP sender congestion control

Event | State | TCP Sender Action | Commentary

ACK receipt for previously unACKed data | Slow Start (SS) | CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance" | Resulting in a doubling of CongWin every RTT

ACK receipt for previously unACKed data | Congestion Avoidance (CA) | CongWin = CongWin + MSS·(MSS/CongWin) | Additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK | SS or CA | Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance" | Fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Timeout | SS or CA | Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start" | Enter slow start

Duplicate ACK | SS or CA | Increment duplicate ACK count for the segment being ACKed | CongWin and Threshold not changed

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when a loss occurs.

When the window is W, the throughput is W/RTT.

Just after a loss, the window drops to W/2, the throughput to W/(2·RTT).

Average throughput: 0.75·W/RTT
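One way to see the 0.75 factor (a sketch of the reasoning, not on the slide): between losses the window grows roughly linearly from W/2 back up to W, so the time-average window is about (W/2 + W)/2 = 0.75·W, giving an average throughput of about 0.75·W/RTT.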

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput

Requires window size W = 83,333 in-flight segments

Throughput in terms of loss rate:
throughput = 1.22·MSS / (RTT·√L)

-> L = 2·10^-10  Wow!

New versions of TCP for high-speed networks needed!
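A quick numeric check of these figures (a sketch; the 1.22·MSS/(RTT·√L) relation is the one quoted above, solved for L):

    # Required window and loss rate for 10 Gbps with 1500-byte segments, 100 ms RTT.
    mss_bits = 1500 * 8
    rtt = 0.100                      # seconds
    target = 10e9                    # 10 Gbps
    window = target * rtt / mss_bits                        # ~83,333 segments in flight
    loss_rate = (1.22 * mss_bits / (rtt * target)) ** 2     # ~2.1e-10
    print(round(window), loss_rate)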

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

[Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R]

3-99

Why is TCP fair?

Two competing homogeneous sessions (e.g. similar propagation delay, etc.):

Additive increase gives a slope of 1 as throughput increases

Multiplicative decrease decreases throughput proportionally

[Figure: connection 1 throughput vs. connection 2 throughput, both bounded by R; repeated cycles of congestion avoidance (additive increase) and loss (decrease window by a factor of 2) move the operating point toward the equal bandwidth share line]

3-100

Fairness (more)

Fairness and UDP:
Multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
Research area: TCP-friendly congestion control

Fairness and parallel TCP connections:
nothing prevents an app from opening parallel connections between 2 hosts
Web browsers do this
Example: a link of rate R supporting 9 connections; a new app asks for 1 TCP connection, gets rate R/10; a new app asks for 11 TCP connections, gets around R/2

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by: TCP connection establishment, data transmission delay, slow start (the sender's congestion window)

Notation, assumptions:
assume one link between client and server, of rate R
S: MSS (bits); O: object size (bits)
no retransmissions (no loss, no corruption); protocol overhead is negligible

Window size:
first assume a fixed congestion window of W segments
then dynamic window modeling: slow start

3-102

Fixed congestion window (1): assume no congestion window constraint

Let W denote a 'fixed' congestion window size (a positive integer)

First case: W·S/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data has been sent

delay = 2·RTT + O/R

(The object request is piggybacked; this is the "lower bound" of the latency)

3-103

Fixed (or static) congestion window (2)

Second case: W·S/R < RTT + S/R: wait for the ACK after sending a window's worth of data

delay = 2·RTT + O/R + (K-1)·[S/R + RTT - W·S/R]

where K is the number of windows of data that cover the object: if O is the "size" of the object, K = O/(W·S) (rounded up to an integer)
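Both static-window cases collapse into one small function (an illustrative sketch using the formulas above; K is rounded up as stated):

    # Latency with a fixed congestion window of W segments (illustrative).
    import math

    def static_window_delay(O, S, R, rtt, W):
        K = math.ceil(O / (W * S))                  # windows needed to cover the object
        stall = S / R + rtt - W * S / R             # idle time per window
        if stall <= 0:                              # case 1: W*S/R >= RTT + S/R
            return 2 * rtt + O / R
        return 2 * rtt + O / R + (K - 1) * stall    # case 2: sender stalls K-1 times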

3-104

TCP Delay Modeling Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the delay for one object is:

Latency = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where P is the number of times TCP idles at the server:

P = min{Q, K-1}

- where Q is the number of times the server would idle if the object were of infinite size
- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[Figure: timeline at client and server - initiate TCP connection (RTT), request object, then first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R; complete transmission, object delivered]

Example:
• O/S = 15 segments
• K = 4 windows
• Q = 2
• P = min{K-1, Q} = 2

Server idles P = 2 times

Delay components:
• 2 RTT for connection establishment and request
• O/R to transmit the object
• the time the server idles due to slow start

Server idles P = min{K-1, Q} times

3-106

TCP Delay Modeling (3)

S/R + RTT = time from when the server starts to send a segment until the server receives its acknowledgement

2^(k-1)·S/R = time to transmit the kth window

idle time after the kth window = [S/R + RTT - 2^(k-1)·S/R]^+

delay = 2·RTT + O/R + sum over k=1..P of idleTime_k
      = 2·RTT + O/R + sum over k=1..P of [S/R + RTT - 2^(k-1)·S/R]
      = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where k = 1, 2, ..., K

[Figure: the same client/server timeline as before, with windows S/R, 2S/R, 4S/R, 8S/R]

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

K = min{k : 2^0·S + 2^1·S + ... + 2^(k-1)·S >= O}
  = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
  = min{k : 2^k - 1 >= O/S}
  = min{k : k >= log2(O/S + 1)}
  = ceil( log2(O/S + 1) )

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
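Putting K, Q, P and the latency formula together (an illustrative Python sketch; the Q expression is an assumption derived from the same idle-time condition as above):

    # Slow-start latency model (illustrative).
    import math

    def slow_start_latency(O, S, R, rtt):
        K = math.ceil(math.log2(O / S + 1))                  # windows covering the object
        Q = math.floor(math.log2(1 + rtt * R / S)) + 1       # idles for an infinite object
        P = min(Q, K - 1)                                    # actual number of idles
        return 2 * rtt + O / R + P * (S / R + rtt) - (2 ** P - 1) * S / R

    # e.g. S = 536*8, O = 100e3*8, R = 1e6, rtt = 0.1 gives about 1.4 s,
    # close to the 1 Mbps row of Experiment A below.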

3-108

Experiment A: S = 536 bytes, RTT = 100 msec, O = 100 Kbytes (relatively large)

R         O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps   28.6 sec   1   28.8 sec                    28.9 sec
100 kbps  8.0 sec    2   8.2 sec                     8.4 sec
1 Mbps    800 msec   5   1.0 sec                     1.5 sec
10 Mbps   80 msec    7   0.28 sec                    0.98 sec

1. Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B: S = 536 bytes, RTT = 100 msec, O = 5 Kbytes (relatively small)

R         O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps   1.43 sec   1   1.63 sec                    1.73 sec
100 kbps  0.4 sec    2   0.6 sec                     0.76 sec
1 Mbps    40 msec    3   0.24 sec                    0.52 sec
10 Mbps   4 msec     3   0.20 sec                    0.50 sec

1. Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C: S = 536 bytes, RTT = 1 sec, O = 5 Kbytes (relatively small)

R         O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps   1.43 sec   3   3.4 sec                     5.8 sec
100 kbps  0.4 sec    3   2.4 sec                     5.2 sec
1 Mbps    40 msec    3   2.0 sec                     5.0 sec
10 Mbps   4 msec     3   2.0 sec                     5.0 sec

1. Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling
Assume a Web page consists of:
1 base HTML page (of size O bits)
M images (each of size O bits)

Non-persistent HTTP:
M+1 TCP connections in series
Response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times

Persistent HTTP:
2 RTT to request and receive base HTML file
1 RTT to request and receive M images
Response time = (M+1)·O/R + 3·RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
Suppose M/X is an integer
1 TCP connection for the base file
M/X sets of parallel connections for images
Response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times
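The three response-time formulas, with the idle-time terms omitted, in one small sketch (illustrative only):

    # HTTP response-time comparison (illustrative; idle times omitted).
    def http_response_times(O, R, rtt, M, X):
        non_persistent = (M + 1) * O / R + (M + 1) * 2 * rtt
        persistent     = (M + 1) * O / R + 3 * rtt
        parallel       = (M + 1) * O / R + (M / X + 1) * 2 * rtt
        return non_persistent, persistent, parallel

    # e.g. O = 5e3*8 bits, R = 100e3 bps, rtt = 0.1 s, M = 10, X = 5
    # reproduces the ordering seen in the charts that follow.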

3-112

[Figure: HTTP response time (in seconds, 0-20) for 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps links, comparing non-persistent, persistent and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5]

For a low-bandwidth connection, response time is dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

[Figure: HTTP response time (in seconds, 0-70) for 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps links, comparing non-persistent, persistent and parallel non-persistent HTTP; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5]

For larger RTT, response time is dominated by TCP establishment & slow start delays. Persistent connections now give an important improvement, particularly in high delay·bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services: multiplexing, demultiplexing; reliable data transfer; flow control; congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application, transport layers), into the network "core"

3-11

Connection-oriented demux

TCP socket identified by 4-tuple source IP address source port number dest IP address dest port number

recv host uses all four values to direct segment to appropriate socket

Server host may support many simultaneous TCP sockets each socket identified

by its own 4-tuple

Web servers have different sockets for each connecting client non-persistent HTTP

will have different socket for each request

3-12

Connection-oriented demux (cont)

ClientIPB

P3

client IP A

P1P1P3

serverIP C

SP 80

DP 9157

SP 9157

DP 80

SP 80

DP 5775

SP 5775

DP 80

P4

(S-IPSP D-IP DP)

3-13

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-14

UDP User Datagram Protocol [RFC 768]

ldquono frillsrdquo ldquobare bonesrdquo Internet transport protocol

ldquobest effortrdquo service UDP segments may be lost delivered out of order

to app connectionless

no handshaking between UDP sender receiver

each UDP segment handled independently of others

Why is there a UDP no connection

establishment (which can add delay)

simple no connection state at sender receiver

small segment header no congestion control

UDP can blast away as fast as desired

3-15

UDP more

often used for streaming multimedia apps loss tolerant rate sensitive

other UDP uses DNS SNMP

reliable transfer over UDP add reliability at application layer application-specific

error recovery (eg FTP based on UDP but with recovery)

source port dest port

32 bits

Applicationdata

(message)

UDP segment format

length checksumLength in

bytes of UDPsegmentincluding

headerWhen the network is stressed you PRAY

3-16

UDP checksum

Sender treat segment

contents as sequence of 16-bit integers

checksum addition (1rsquos complement sum) of segment contents

sender puts checksum value into UDP checksum field

Receiver compute checksum of

received segment check if computed

checksum equals checksum field value NO - error detected YES - no error detected

But maybe errors nonetheless More later hellip

Goal detect ldquoerrorsrdquo (eg flipped bits) in transmitted segment

eg 1+2+3 = 6 So is 0+3+3=6

3-17

Internet Checksum Example Note

When adding numbers a carryout from the most significant bit needs to be added to the result

Example add two 16-bit integers

1 1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 01 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 1 0 01 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1

wraparound

sumchecksum

3-18

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-19

Principles of Reliable data transfer important in app transport link layers top-10 list of important networking topics

characteristics of unreliable channel will determine complexity of reliable data transfer protocol (rdt)

abstractionThis picture sets the scenario

3-20

Reliable data transfer getting started

sendside

receiveside

rdt_send() called from above (eg by app)

Passed data to deliver to receiver upper

layer

udt_send() called by rdtto transfer packet over

unreliable channel to receiver

rdt_rcv() called when packet arrives on rcv-side of

channel

deliver_data() called by rdt to deliver data

to upper

Let us now look at the gut of these modules Any question

3-21

Reliable data transfer getting startedWersquoll incrementally develop sender receiver

sides of reliable data transfer protocol (rdt)

consider only unidirectional data transfer but control info will flow on both directions

use finite state machines (FSM) to specify sender receiver

state1

state2

event causing state transitionactions taken on state transition

state when in this ldquostaterdquo next

state uniquely determined by

next eventeventactions

(DONrsquoT FALL ASLEEP)

Event timer receives message hellipetcAction executes a program send message hellipetc

3-22

Rdt10 reliable transfer over a reliable channel

underlying channel perfectly reliable no bit errors no loss of packets

separate FSMs for sender receiver sender sends data into underlying channel receiver reads data from underlying channel

Wait for call from

above packet = make_pkt(data)udt_send(packet)

rdt_send(data)

extract (packetdata)deliver_data(data)

Wait for call from

below

rdt_rcv(packet)

sender receiver

In reality this is an unrealistic assumption but

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in packet recall UDP checksum to detect bit errors

the question how to recover from errors acknowledgements (ACKs) receiver explicitly tells

sender that pkt received OK negative acknowledgements (NAKs) receiver

explicitly tells sender that pkt had errors sender retransmits pkt on receipt of NAK human scenarios using ACKs NAKs

new mechanisms in rdt20 (beyond rdt10) error detection receiver feedback control msgs (ACKNAK) rcvr-

gtsender

Ack I love u I love u 2Nak I love u I donrsquot love u

3-24

rdt20 FSM specification

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

belowsender

receiverrdt_send(data)

Buffer is needed to store data from application layer or to block call

3-25

rdt20 operation with no errors

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

3-26

rdt20 error scenario

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

GOT IT

3-27

rdt20 has a fatal flaw

What happens if ACKNAK corrupted

sender doesnrsquot know what happened at receiver

canrsquot just retransmit possible duplicate

What to do sender ACKsNAKs

receiverrsquos ACKNAK What if sender ACKNAK lost

retransmit but this might cause retransmission of correctly received pkt

Handling duplicates sender adds sequence

number to each pkt sender retransmits

current pkt if ACKNAK garbled

receiver discards (doesnrsquot deliver up) duplicate pkt

Sender sends one packet then waits for receiver response

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion: informally: "too many sources sending too much data too fast for the network to handle"

different from flow control!

manifestations: lost packets (buffer overflow at routers), long delays (queueing in router buffers)

another top-10 problem!

TCP provides one of the MANY WAYS to perform CC

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous)

one router, infinite buffers, output link capacity C

no retransmission

large delays when congested

maximum achievable throughput

(Diagram: Host A and Host B each send λin (original data, from the application) into a router with unlimited shared output link buffers; λout is delivered to the receiving application.)

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers

sender retransmission of lost packets (due to the transport layer: a pkt got dropped)

(Diagram: Host A offers λin (original data) and λ'in (original data plus retransmitted data) to a router with finite shared output link buffers; λout is delivered at Host B.)

3-81

Causes/costs of congestion: scenario 2

(a) always: λin = λout (goodput)

(b) "perfect" retransmission, only when loss: λ'in > λout

(c) retransmission of a delayed (not lost) packet makes λ'in larger (than the perfect case) for the same λout

"costs" of congestion: more work (retransmissions) for a given "goodput"; unneeded retransmissions: the link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3

four senders, multihop paths, timeout/retransmit

Q: what happens as λin and λ'in increase?

(Diagram: Host A offers λin (original data) and λ'in (original data plus retransmitted data) across routers with finite shared output link buffers toward Host B; λout is the delivered traffic.)

3-83

Causes/costs of congestion: scenario 3

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted

(Figure: as the offered load keeps increasing, λout eventually falls toward zero.) The system collapses (e.g. students!)

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control: no explicit feedback from the network; congestion inferred from end-system observed loss and delay; approach taken by TCP

Network-assisted congestion control: routers provide feedback to end systems: a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM), or an explicit rate the sender should send at

3-85

Case study ATM ABR congestion control

ABR: available bit rate: "elastic service": if the sender's path is "underloaded", the sender should use the available bandwidth; if the sender's path is congested, the sender is throttled to its minimum guaranteed rate

RM (resource management) cells: sent by the sender, interspersed with data cells; bits in the RM cell are set by switches ("network-assisted"): NI bit: no increase in rate (mild congestion), CI bit: congestion indication; RM cells are returned to the sender by the receiver, with the bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell, so the sender's send rate becomes the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP: segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission: LastByteSent - LastByteAcked <= CongWin

Roughly: rate = CongWin / RTT  bytes/sec

CongWin is a dynamic function of perceived network congestion

How does the sender perceive congestion? loss event = timeout or 3 duplicate ACKs; the TCP sender reduces its rate (CongWin) after a loss event

three mechanisms: AIMD, slow start, conservative behavior after timeout events

Note: CC must be efficient to make use of the available BW

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

(Figure: congestion window of a long-lived TCP connection over time: a sawtooth oscillating through roughly 8, 16 and 24 Kbytes.)

Role of ACKs: 1. an indication of loss; 2. ACKs are self-clocking: if the delay is large, the congestion window grows more slowly

Human analogy HK government or CUHK
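A toy sketch of the AIMD probing described above, in units of MSS (the loss pattern here is made up purely to show the sawtooth; real losses come from the network):

```java
public class AimdSketch {
    public static void main(String[] args) {
        double cwnd = 10;                         // congestion window, in MSS
        boolean[] lossInRtt = new boolean[20];    // synthetic loss events
        lossInRtt[7] = true;
        lossInRtt[14] = true;

        for (int rtt = 0; rtt < lossInRtt.length; rtt++) {
            if (lossInRtt[rtt]) {
                cwnd = Math.max(1, cwnd / 2);     // multiplicative decrease
            } else {
                cwnd = cwnd + 1;                  // additive increase: +1 MSS per RTT
            }
            System.out.printf("RTT %2d: cwnd = %.1f MSS%n", rtt, cwnd);
        }
    }
}
```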

3-90

TCP Slow Start

When the connection begins, CongWin = 1 MSS. Example: MSS = 500 bytes & RTT = 200 msec, so the initial rate is only 20 kbps

available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate

When the connection begins, increase the rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)

When the connection begins, increase the rate exponentially until the first loss event: double CongWin every RTT, done by incrementing CongWin for every ACK received

Summary: the initial rate is slow but ramps up exponentially fast

(Diagram: Host A sends one segment, then two, then four, doubling every RTT as Host B's ACKs return.)
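The per-ACK increment is exactly what produces the doubling: one ACK comes back for each segment of the current window, so CongWin grows by a whole window's worth each RTT. A tiny sketch (assuming no loss, one ACK per segment, and an assumed MSS of 1460 bytes):

```java
public class SlowStartSketch {
    public static void main(String[] args) {
        final int MSS = 1460;        // assumed MSS in bytes
        int congWin = MSS;           // connection begins at 1 MSS
        for (int rtt = 1; rtt <= 4; rtt++) {
            int segmentsInFlight = congWin / MSS;
            for (int ack = 0; ack < segmentsInFlight; ack++) {
                congWin += MSS;      // +1 MSS for every ACK received
            }
            System.out.println("after RTT " + rtt + ": CongWin = " + congWin / MSS + " MSS");
        }
    }
}
```

(It prints 2, 4, 8, 16 MSS: the window doubles each round trip.)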

3-92

Refinement: after 3 dup ACKs, CongWin is cut in half and the window then grows linearly. This is known as the "fast recovery" phase, implemented in the newer TCP Reno.

But after a timeout event, CongWin is instead set to 1 MSS; the window then grows exponentially up to a threshold, then grows linearly.

Philosophy: 3 dup ACKs indicate that the network is still capable of delivering some segments; a timeout before 3 dup ACKs is "more alarming".

3-93

Refinement (more). Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.

Implementation: a variable Threshold; at a loss event, Threshold is set to 1/2 of the CongWin just before the loss event.

(Figure: congestion window size (segments) versus transmission round, for rounds 1 through 15, showing the threshold and how TCP Tahoe and TCP Reno diverge after a loss.)

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase: the window grows exponentially.

When CongWin is above Threshold, the sender is in the congestion-avoidance phase: the window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss

3-95

TCP sender congestion control

Event: ACK receipt for previously unacked data. State: Slow Start (SS). Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance". Commentary: results in a doubling of CongWin every RTT.

Event: ACK receipt for previously unacked data. State: Congestion Avoidance (CA). Action: CongWin = CongWin + MSS * (MSS / CongWin). Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT.

Event: loss event detected by triple duplicate ACK. State: SS or CA. Action: Threshold = CongWin/2, CongWin = Threshold, set state to "Congestion Avoidance". Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS.

Event: timeout. State: SS or CA. Action: Threshold = CongWin/2, CongWin = 1 MSS, set state to "Slow Start". Commentary: enter slow start.

Event: duplicate ACK. State: SS or CA. Action: increment the duplicate ACK count for the segment being acked. Commentary: CongWin and Threshold not changed.
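A compact sketch that mirrors the event table above (window and threshold in bytes; the initial threshold and the event sequence in main are arbitrary). It is a teaching model of Reno's rules, not a faithful kernel implementation (for example, the window inflation during fast recovery is omitted):

```java
public class RenoSketch {
    static final int MSS = 1460;
    enum State { SLOW_START, CONGESTION_AVOIDANCE }

    static State state = State.SLOW_START;
    static double congWin = MSS;
    static double threshold = 64 * 1024;   // assumed initial threshold
    static int dupAckCount = 0;

    static void onNewAck() {                              // ACK for previously unacked data
        dupAckCount = 0;
        if (state == State.SLOW_START) {
            congWin += MSS;                               // doubles every RTT
            if (congWin > threshold) state = State.CONGESTION_AVOIDANCE;
        } else {
            congWin += MSS * (MSS / congWin);             // roughly +1 MSS per RTT
        }
    }

    static void onDuplicateAck() {
        dupAckCount++;
        if (dupAckCount == 3) {                           // triple duplicate ACK
            threshold = congWin / 2;
            congWin = Math.max(threshold, MSS);           // never below 1 MSS
            state = State.CONGESTION_AVOIDANCE;
        }
    }

    static void onTimeout() {
        threshold = congWin / 2;
        congWin = MSS;                                    // back to 1 MSS
        state = State.SLOW_START;
        dupAckCount = 0;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 50; i++) onNewAck();          // ramp up
        onDuplicateAck(); onDuplicateAck(); onDuplicateAck();
        System.out.println(state + ", CongWin = " + (int) congWin + " bytes");
    }
}
```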

3-96

TCP throughput

What is the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when a loss occurs.

When the window is W, the throughput is W/RTT.

Just after a loss, the window drops to W/2 and the throughput to W/(2 RTT).

Average throughput: 0.75 W/RTT
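The 0.75 W/RTT figure is just the average of the sawtooth: between losses the window ramps roughly linearly from W/2 back up to W, so (a back-of-the-envelope average, ignoring slow start):

\[
\overline{\text{throughput}} \;\approx\; \frac{1}{RTT}\cdot\frac{1}{2}\left(\frac{W}{2}+W\right) \;=\; \frac{3}{4}\cdot\frac{W}{RTT} \;=\; 0.75\,\frac{W}{RTT}
\]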

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput

Requires a window size of W = 83,333 in-flight segments

Throughput in terms of loss rate: throughput = 1.22 · MSS / (RTT · √L)

To reach 10 Gbps this requires a loss rate of L = 2·10^-10. Wow! New versions of TCP are needed for high-speed networks.
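Plugging the stated numbers into that formula, as a quick back-of-the-envelope check (not a measurement):

```java
public class HighSpeedTcpCheck {
    public static void main(String[] args) {
        double mssBits = 1500 * 8;          // 1500-byte segments
        double rtt = 0.100;                 // 100 ms
        double target = 10e9;               // 10 Gbps

        double windowSegments = target * rtt / mssBits;                  // ~83,333
        double lossRate = Math.pow(1.22 * mssBits / (rtt * target), 2);  // ~2.1e-10

        System.out.printf("required window    W ~ %.0f segments%n", windowSegments);
        System.out.printf("required loss rate L ~ %.1e%n", lossRate);
    }
}
```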

3-98

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should get an average rate of R/K

(Diagram: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R.)

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g. similar propagation delay, etc.):

additive increase gives a slope of 1 as throughput increases; multiplicative decrease cuts throughput proportionally

(Figure: phase plot of Connection 2 throughput versus Connection 1 throughput, bounded by R on each axis. Congestion avoidance (additive increase) moves the operating point along a 45-degree line; each loss halves both windows, so the operating point converges toward the equal bandwidth share line.)

3-100

Fairness (more)

Fairness and UDP: multimedia apps often do not use TCP: they do not want their rate throttled by congestion control. Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss. (Research area: TCP-friendly rate control.)

Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts, and Web browsers do this. Example: a link of rate R supporting 9 connections; a new app asking for 1 TCP connection gets rate R/10, but a new app asking for 11 TCP connections gets around R/2 (it holds 11 of the 20 connections).

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by: TCP connection establishment, data transmission delay, and slow start (the sender's congestion window)

Notation, assumptions: one link between client and server, of rate R; S: MSS (bits); O: object size (bits); no retransmissions (no loss, no corruption); protocol overhead is negligible

Window size: first assume a fixed congestion window of W segments, then dynamic window modeling (slow start)

3-102

Fixed congestion window (1): assume no congestion window constraint

Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data has been sent

delay = 2 RTT + O/R

(The object request is piggybacked, so this is the "lower bound" on latency.)

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R: the sender must wait for an ACK after sending a window's worth of data

delay = 2 RTT + O/R + (K - 1) [S/R + RTT - WS/R]

where K is the number of windows of data that cover the object; if O is the "size" of the object, K = O/(WS), rounded up to an integer.
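A small sketch that evaluates both cases of the static-window model (symbols as defined above; the inputs in main are arbitrary example values, not taken from the slides):

```java
public class StaticWindowLatency {
    // O, S in bits; R in bits/sec; rtt in seconds; W in segments
    static double latency(double O, double S, double R, double rtt, int W) {
        int K = (int) Math.ceil(O / (W * S));        // windows that cover the object
        double stall = S / R + rtt - W * S / R;      // per-window wait, if positive
        if (stall <= 0) {
            return 2 * rtt + O / R;                  // case 1: ACKs keep the pipe full
        }
        return 2 * rtt + O / R + (K - 1) * stall;    // case 2: sender idles K-1 times
    }

    public static void main(String[] args) {
        // Example (assumed values): 100 KB object, 536-byte segments, 1 Mbps, 100 ms RTT, W = 4
        System.out.printf("delay = %.2f s%n", latency(800_000, 536 * 8, 1e6, 0.1, 4));
    }
}
```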

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

Latency = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where P is the number of times TCP idles at the server:

P = min{Q, K - 1}

- where Q is the number of times the server would idle if the object were of infinite size
- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

(Figure: client/server timeline: one RTT to initiate the TCP connection and one to request the object, then the server sends windows of S/R, 2S/R, 4S/R, 8S/R, ... until the complete object is delivered.)

Example: O/S = 15 segments, K = 4 windows, Q = 2, P = min{K-1, Q} = 2, so the server idles P = 2 times

Delay components: 2 RTT for connection establishment and request; O/R to transmit the object; plus the time the server idles due to slow start

3-106

TCP Delay Modeling (3)

S/R + RTT = time from when the server starts to send a segment until it receives the acknowledgement

2^(k-1) S/R = time to transmit the k-th window

idle time after the k-th window = [S/R + RTT - 2^(k-1) S/R]+ , for k = 1, 2, ..., K

delay = 2 RTT + O/R + sum over k = 1..P of idleTime_k
      = 2 RTT + O/R + sum over k = 1..P of [S/R + RTT - 2^(k-1) S/R]
      = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

(Figure: the same client/server timeline as on the previous slide, with windows S/R, 2S/R, 4S/R, 8S/R.)

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

K = min{k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O}
  = min{k : (2^k - 1) S >= O}
  = min{k : 2^k >= O/S + 1}
  = min{k : k >= log2(O/S + 1)}
  = ceil( log2(O/S + 1) )

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
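Putting the pieces together: the sketch below computes K, Q, P and the slow-start latency formula from the earlier slide. The closed form used for Q is one standard derivation consistent with its definition above (the slides leave Q as homework), so treat it as an assumption. Two rows from the experiments that follow are used as spot checks:

```java
public class SlowStartLatency {
    // O, S in bits; R in bits/sec; rtt in seconds
    static double latency(double O, double S, double R, double rtt) {
        int K = (int) Math.ceil(Math.log(O / S + 1) / Math.log(2));              // windows covering the object
        int Q = (int) Math.floor(Math.log(1 + rtt * R / S) / Math.log(2)) + 1;   // idles for an infinite object (assumed closed form)
        int P = Math.min(K - 1, Q);                                              // times the server idles
        return 2 * rtt + O / R + P * (rtt + S / R) - (Math.pow(2, P) - 1) * S / R;
    }

    public static void main(String[] args) {
        double S = 536 * 8;   // 536-byte segments, as in the experiments
        // Experiment A row: R = 100 kbps, RTT = 100 ms, O = 100 kbytes  ->  about 8.4 s
        System.out.printf("%.2f s%n", latency(100_000 * 8, S, 100_000, 0.1));
        // Experiment C row: R = 1 Mbps,   RTT = 1 s,     O = 5 kbytes   ->  about 5.0 s
        System.out.printf("%.2f s%n", latency(5_000 * 8, S, 1e6, 1.0));
    }
}
```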

3-108

Experiment A

R          O/R        P    Min latency (O/R + 2 RTT)    Latency with slow start
28 kbps    28.6 sec   1    28.8 sec                     28.9 sec
100 kbps   8 sec      2    8.2 sec                      8.4 sec
1 Mbps     800 msec   5    1 sec                        1.5 sec
10 Mbps    80 msec    7    0.28 sec                     0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R          O/R        P    Min latency (O/R + 2 RTT)    Latency with slow start
28 kbps    1.43 sec   1    1.63 sec                     1.73 sec
100 kbps   0.4 sec    2    0.6 sec                      0.76 sec
1 Mbps     40 msec    3    0.24 sec                     0.52 sec
10 Mbps    4 msec     3    0.20 sec                     0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

R          O/R        P    Min latency (O/R + 2 RTT)    Latency with slow start
28 kbps    1.43 sec   3    3.4 sec                      5.8 sec
100 kbps   0.4 sec    3    2.4 sec                      5.2 sec
1 Mbps     40 msec    3    2.0 sec                      5.0 sec
10 Mbps    4 msec     3    2.0 sec                      5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling. Assume a Web page consists of 1 base HTML page (of size O bits) and M images (each also of size O bits).

Non-persistent HTTP: M+1 TCP connections in series; Response time = (M+1) O/R + (M+1) 2 RTT + sum of idle times

Persistent HTTP: 2 RTT to request and receive the base HTML file, 1 RTT to request and receive the M images; Response time = (M+1) O/R + 3 RTT + sum of idle times

Non-persistent HTTP with X parallel connections (suppose M/X is an integer): 1 TCP connection for the base file, then M/X batches of X parallel connections for the images; Response time = (M+1) O/R + (M/X + 1) 2 RTT + sum of idle times
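A sketch that evaluates the three response-time formulas side by side. The slow-start idle times are ignored here for brevity, so these are lower bounds rather than the full figures plotted on the next two slides:

```java
public class HttpResponseTime {
    // O in bits; R in bits/sec; rtt in seconds; M images; X parallel connections
    static double nonPersistent(double O, double R, double rtt, int M) {
        return (M + 1) * O / R + (M + 1) * 2 * rtt;
    }
    static double persistent(double O, double R, double rtt, int M) {
        return (M + 1) * O / R + 3 * rtt;
    }
    static double parallelNonPersistent(double O, double R, double rtt, int M, int X) {
        return (M + 1) * O / R + (M / (double) X + 1) * 2 * rtt;
    }

    public static void main(String[] args) {
        double O = 5_000 * 8;                 // 5 KB objects
        double rtt = 0.1;                     // 100 ms
        int M = 10, X = 5;                    // values used on the next slide
        for (double R : new double[]{28_000, 100_000, 1e6, 10e6}) {
            System.out.printf("R = %8.0f bps: non-persistent %6.2f s, persistent %6.2f s, parallel %6.2f s%n",
                    R, nonPersistent(O, R, rtt, M), persistent(O, R, rtt, M),
                    parallelNonPersistent(O, R, rtt, M, X));
        }
    }
}
```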

3-112

(Chart: HTTP response time in seconds for RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5, at link rates of 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps, comparing non-persistent, persistent, and parallel non-persistent connections.)

For a low-bandwidth link, connection and response time are dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

(Chart: HTTP response time in seconds for RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5, at link rates of 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps, comparing non-persistent, persistent, and parallel non-persistent connections.)

For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services: multiplexing and demultiplexing, reliable data transfer, flow control, congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application and transport layers) and heading into the network "core"

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

K = min{k : 2^0·S + 2^1·S + ... + 2^(k-1)·S >= O}
  = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
  = min{k : 2^k - 1 >= O/S}
  = min{k : k >= log2(O/S + 1)}
  = ceil( log2(O/S + 1) )

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
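Putting the pieces together, a small sketch that mechanizes the slow-start latency model above. The closed form used for Q is my assumption (the slide defers its derivation to homework); the rest follows the formulas on these slides directly:

    import math

    def slow_start_latency(O, S, R, RTT):
        """Latency = 2*RTT + O/R + P*(RTT + S/R) - (2**P - 1)*S/R.
        O, S in bits; R in bits/sec; RTT in seconds."""
        K = math.ceil(math.log2(O / S + 1))          # windows that cover the object
        Q = math.ceil(math.log2(1 + RTT * R / S))    # idles for an infinite-size object (assumed closed form)
        P = min(Q, K - 1)                            # times the server actually idles
        return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * (S / R)

    # Experiment A row: S = 536 bytes, RTT = 100 msec, O = 100 kbytes, R = 100 kbps
    print(round(slow_start_latency(100_000 * 8, 536 * 8, 100_000, 0.1), 2))  # ~8.36 s (table entry: 8.4 s)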

3-108

Experiment A

R          O/R        P   Min latency (O/R + 2·RTT)   Latency with slow start
28 kbps    28.6 sec   1   28.8 sec                    28.9 sec
100 kbps   8 sec      2   8.2 sec                     8.4 sec
1 Mbps     800 msec   5   1 sec                       1.5 sec
10 Mbps    80 msec    7   0.28 sec                    0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (a relatively large object).

Note: slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R          O/R        P   Min latency (O/R + 2·RTT)   Latency with slow start
28 kbps    1.43 sec   1   1.63 sec                    1.73 sec
100 kbps   0.4 sec    2   0.6 sec                     0.76 sec
1 Mbps     40 msec    3   0.24 sec                    0.52 sec
10 Mbps    4 msec     3   0.20 sec                    0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (a relatively small object).

Note: slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

R          O/R        P   Min latency (O/R + 2·RTT)   Latency with slow start
28 kbps    1.43 sec   3   3.4 sec                     5.8 sec
100 kbps   0.4 sec    3   2.4 sec                     5.2 sec
1 Mbps     40 msec    3   2.0 sec                     5.0 sec
10 Mbps    4 msec     3   2.0 sec                     5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (a relatively small object).

Note: slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) and M images (each of size O bits).

Non-persistent HTTP: M+1 TCP connections in series.
Response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times.

Persistent HTTP: 2 RTT to request and receive the base HTML file, 1 RTT to request and receive the M images.
Response time = (M+1)·O/R + 3·RTT + sum of idle times.

Non-persistent HTTP with X parallel connections (suppose M/X is an integer): 1 TCP connection for the base file, then M/X sets of X parallel connections for the images.
Response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times.
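A small sketch comparing the three response-time expressions above, ignoring the slow-start idle times (so it is a lower-bound comparison; function names are mine):

    def non_persistent(M, O, R, RTT):
        return (M + 1) * O / R + (M + 1) * 2 * RTT

    def persistent(M, O, R, RTT):
        return (M + 1) * O / R + 3 * RTT

    def parallel_non_persistent(M, X, O, R, RTT):
        return (M + 1) * O / R + (M / X + 1) * 2 * RTT

    # RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5 (the setting of the second chart below)
    O, RTT, M, X = 5_000 * 8, 1.0, 10, 5
    for R in (28_000, 100_000, 1_000_000, 10_000_000):
        print(R, round(non_persistent(M, O, R, RTT), 1),
              round(persistent(M, O, R, RTT), 1),
              round(parallel_non_persistent(M, X, O, R, RTT), 1))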

3-112

[Bar chart: HTTP response time in seconds at 28 kbps, 100 kbps, 1 Mbps and 10 Mbps for non-persistent, persistent, and parallel non-persistent connections; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.]

For a low-bandwidth connection, response time is dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

[Bar chart: HTTP response time in seconds at 28 kbps, 100 kbps, 1 Mbps and 10 Mbps for non-persistent, persistent, and parallel non-persistent connections; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.]

For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay·bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services: multiplexing, demultiplexing; reliable data transfer; flow control; congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application, transport layers), into the network "core".


3-13

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP: segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-14

UDP User Datagram Protocol [RFC 768]

"no frills," "bare bones" Internet transport protocol

"best effort" service: UDP segments may be lost or delivered out of order to the app

connectionless: no handshaking between UDP sender and receiver; each UDP segment handled independently of the others

Why is there a UDP? no connection establishment (which can add delay); simple: no connection state at sender or receiver; small segment header; no congestion control: UDP can blast away as fast as desired

3-15

UDP more

often used for streaming multimedia apps loss tolerant rate sensitive

other UDP uses DNS SNMP

reliable transfer over UDP add reliability at application layer application-specific

error recovery (eg FTP based on UDP but with recovery)

[UDP segment format, 32 bits wide: source port, dest port; length, checksum; application data (message). The length field is the length in bytes of the UDP segment, including the header.]

When the network is stressed, you PRAY!

3-16

UDP checksum

Goal: detect "errors" (e.g. flipped bits) in the transmitted segment.

Sender: treats segment contents as a sequence of 16-bit integers; checksum = addition (1's complement sum) of the segment contents; the sender puts the checksum value into the UDP checksum field.

Receiver: computes the checksum of the received segment and checks whether it equals the checksum field value. NO - error detected. YES - no error detected. But maybe errors nonetheless? More later …

e.g. 1+2+3 = 6, but so is 0+3+3 = 6: different data can produce the same checksum.

3-17

Internet Checksum Example Note

When adding numbers a carryout from the most significant bit needs to be added to the result

Example add two 16-bit integers

               1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0
               1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
             -----------------------------------
sum          1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1
wraparound     1 0 1 1 1 0 1 1 1 0 1 1 1 1 0 0
checksum       0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1
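A minimal sketch that reproduces the example above (16-bit one's-complement sum only; the real UDP checksum also covers a pseudo-header, which is omitted here, and the names are mine):

    def ones_complement_sum16(words):
        """16-bit one's complement sum: add, then fold any carry-out back in."""
        total = 0
        for w in words:
            total += w
            total = (total & 0xFFFF) + (total >> 16)   # wraparound carry
        return total

    a, b = 0b1110011001100110, 0b1101010101010101
    s = ones_complement_sum16([a, b])
    checksum = ~s & 0xFFFF                             # one's complement of the sum
    print(f"{s:016b}")         # 1011101110111100  (wraparound sum)
    print(f"{checksum:016b}")  # 0100010001000011  (checksum)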

3-18

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP: segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-19

Principles of Reliable data transfer important in app transport link layers top-10 list of important networking topics

characteristics of the unreliable channel will determine the complexity of the reliable data transfer protocol (rdt)

the service abstraction: this picture sets the scenario

3-20

Reliable data transfer getting started

sendside

receiveside

rdt_send(): called from above (e.g. by the app); passes data to be delivered to the receiver's upper layer

udt_send(): called by rdt to transfer a packet over the unreliable channel to the receiver

rdt_rcv(): called when a packet arrives on the receive side of the channel

deliver_data(): called by rdt to deliver data to the upper layer

Let us now look at the guts of these modules. Any questions?

3-21

Reliable data transfer getting startedWersquoll incrementally develop sender receiver

sides of reliable data transfer protocol (rdt)

consider only unidirectional data transfer but control info will flow on both directions

use finite state machines (FSM) to specify sender receiver

state1

state2

event causing state transitionactions taken on state transition

state when in this ldquostaterdquo next

state uniquely determined by

next eventeventactions

(DONrsquoT FALL ASLEEP)

Event timer receives message hellipetcAction executes a program send message hellipetc

3-22

Rdt10 reliable transfer over a reliable channel

underlying channel perfectly reliable no bit errors no loss of packets

separate FSMs for sender receiver sender sends data into underlying channel receiver reads data from underlying channel

Wait for call from

above packet = make_pkt(data)udt_send(packet)

rdt_send(data)

extract (packetdata)deliver_data(data)

Wait for call from

below

rdt_rcv(packet)

sender receiver

In reality this is an unrealistic assumption but

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in packet recall UDP checksum to detect bit errors

the question how to recover from errors acknowledgements (ACKs) receiver explicitly tells

sender that pkt received OK negative acknowledgements (NAKs) receiver

explicitly tells sender that pkt had errors sender retransmits pkt on receipt of NAK human scenarios using ACKs NAKs

new mechanisms in rdt20 (beyond rdt10) error detection receiver feedback control msgs (ACKNAK) rcvr-

gtsender

Ack I love u I love u 2Nak I love u I donrsquot love u

3-24

rdt20 FSM specification

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

belowsender

receiverrdt_send(data)

Buffer is needed to store data from application layer or to block call

3-25

rdt20 operation with no errors

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

3-26

rdt20 error scenario

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

GOT IT

3-27

rdt20 has a fatal flaw

What happens if the ACK/NAK is corrupted?

The sender doesn't know what happened at the receiver; it can't just retransmit: possible duplicate.

What to do? Should the sender ACK/NAK the receiver's ACK/NAK? What if the sender's ACK/NAK is lost? Retransmit — but this might cause retransmission of a correctly received pkt.

Handling duplicates: the sender adds a sequence number to each pkt; the sender retransmits the current pkt if the ACK/NAK is garbled; the receiver discards (doesn't deliver up) a duplicate pkt.

The sender sends one packet, then waits for the receiver's response:

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption: the underlying channel can also lose packets (data or ACKs). Checksum, seq. #, ACKs and retransmissions will help, but are not enough.

Q: how to deal with loss? The sender waits until it is certain that data or an ACK was lost, then retransmits. Yuck — drawbacks?

Approach: the sender waits a "reasonable" amount of time for an ACK and retransmits if no ACK is received in this time.

If the pkt (or ACK) is just delayed (not lost): the retransmission will be a duplicate, but the use of seq. #'s already handles this; the receiver must specify the seq # of the pkt being ACKed.

Requires a countdown timer.

What is the "right value" for the timer? It depends on the flow and network conditions.

3-34

rdt30 sender

State: Wait for call 0 from above
  rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt); start_timer  -> Wait for ACK0
  rdt_rcv(rcvpkt): do nothing

State: Wait for ACK0
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,1)): do nothing
  timeout: udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0): stop_timer  -> Wait for call 1 from above

State: Wait for call 1 from above
  rdt_send(data): sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt); start_timer  -> Wait for ACK1
  rdt_rcv(rcvpkt): do nothing

State: Wait for ACK1
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,0)): do nothing
  timeout: udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,1): stop_timer  -> Wait for call 0 from above
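For readers who prefer code to state diagrams, a minimal sketch of the same alternating-bit (rdt3.0) sender logic. make_pkt, udt_send, corrupt and is_ack are assumed placeholder helpers with the obvious meanings, not a real API:

    import threading

    class Rdt30Sender:
        """Stop-and-wait sender with 0/1 sequence numbers and a retransmission timer."""
        def __init__(self, timeout_s=1.0):
            self.seq = 0
            self.sndpkt = None
            self.timer = None
            self.timeout_s = timeout_s

        def rdt_send(self, data):
            # a full version would refuse new data while an unACKed packet is outstanding
            self.sndpkt = make_pkt(self.seq, data)          # assumed helper
            udt_send(self.sndpkt)                           # assumed helper
            self._start_timer()

        def rdt_rcv(self, rcvpkt):
            if corrupt(rcvpkt) or not is_ack(rcvpkt, self.seq):   # assumed helpers
                return                                      # garbled/duplicate ACK: ignore, timer keeps running
            self.timer.cancel()                             # correct ACK: stop timer, flip seq number
            self.seq = 1 - self.seq

        def _on_timeout(self):
            udt_send(self.sndpkt)                           # retransmit the outstanding packet
            self._start_timer()

        def _start_timer(self):
            self.timer = threading.Timer(self.timeout_s, self._on_timeout)
            self.timer.start()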

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt3.0 works, but performance stinks. Example: 1 Gbps link, 15 ms end-to-end propagation delay, 1 KB packet:

T_transmit = L/R = 8000 bits / 10^9 bits/sec = 8 microsec

U_sender: utilization — the fraction of time the sender is busy sending, where L = packet length (bits) and R = transmission rate (bps):

U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

1 KB pkt every 30 msec -> 33 kB/sec throughput over a 1 Gbps link.

The network protocol limits the use of the physical resources!
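A quick sketch of the utilization number above, plus the pipelining gain claimed two slides later (a 3-packet window gives roughly 3x); names are mine:

    def stop_and_wait_utilization(L, R, RTT):
        return (L / R) / (RTT + L / R)

    def pipelined_utilization(N, L, R, RTT):
        return min(1.0, N * (L / R) / (RTT + L / R))

    L, R, RTT = 8000, 1e9, 30e-3          # 1 KB packet, 1 Gbps link, 30 ms RTT
    print(stop_and_wait_utilization(L, R, RTT))   # ~0.00027
    print(pipelined_utilization(3, L, R, RTT))    # ~0.0008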

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted, t = L/R

first packet bit arrives; last packet bit arrives, send ACK

ACK arrives, send next packet, t = RTT + L/R

U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted, t = L/R

first packet bit arrives; last packet bit arrives, send ACK; last bit of 2nd packet arrives, send ACK; last bit of 3rd packet arrives, send ACK

ACK arrives, send next packet, t = RTT + L/R

U_sender = 3·(L/R) / (RTT + L/R) = 0.024 / 30.008 = 0.0008

Increase utilization by a factor of 3!

3-41

Go-Back-N (sliding window protocol)

Sender: k-bit seq # in pkt header; "window" of up to N consecutive unACKed pkts allowed.

ACK(n): ACKs all pkts up to and including seq # n — "cumulative ACK"; the sender may receive duplicate ACKs (see receiver).

Timer for each in-flight pkt; timeout(n): retransmit pkt n and all higher seq # pkts in the window.

(For now, treat seq #'s as unlimited.)

Q: what happens when a receiver is totally disconnected? MAX RETRY.

DON'T FALL ASLEEP

3-42

GBN sender extended FSM

State: Wait (single state); initialization: base = 1; nextseqnum = 1

timeout:
  start_timer
  udt_send(sndpkt[base]); udt_send(sndpkt[base+1]); ...; udt_send(sndpkt[nextseqnum-1])

rdt_send(data):
  if (nextseqnum < base + N) {
    sndpkt[nextseqnum] = make_pkt(nextseqnum, data, chksum)
    udt_send(sndpkt[nextseqnum])
    if (base == nextseqnum) start_timer      /* no pkt in pipe yet */
    nextseqnum++
  } else refuse_data(data)                   /* buffer data, or block the higher app */

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
  base = getacknum(rcvpkt) + 1
  if (base == nextseqnum) stop_timer
  else start_timer                           /* reset timer */

rdt_rcv(rcvpkt) && corrupt(rcvpkt): do nothing

3-43

GBN receiver extended FSM

ACK-only: always send an ACK for the correctly-received pkt with the highest in-order seq #; may generate duplicate ACKs; need only remember expectedseqnum.

Out-of-order pkt: discard (don't buffer) -> no receiver buffering; re-ACK the pkt with the highest in-order seq #.

State: Wait (single state); initialization: expectedseqnum = 1; sndpkt = make_pkt(expectedseqnum, ACK, chksum)

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum):
  extract(rcvpkt, data); deliver_data(data)
  sndpkt = make_pkt(expectedseqnum, ACK, chksum); udt_send(sndpkt)
  expectedseqnum++

default:
  udt_send(sndpkt)

If an in-order pkt is received, deliver it to the app and ACK it; else just drop it. We don't even need a timer at the receiver — simply rely on retransmission by the sender.

3-44

GBN in action

What determines the size of the window?

1. RTT
2. Buffer at the receiver (flow control)
3. Network congestion

Q: GBN can have poor performance. How? The sender sends pkts 1,2,3,...,9; pkt 1 gets lost; the receiver gets pkts 2,3,4,5,… but discards them all.

Window size = N = 4
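A compact sketch of a Go-Back-N sender in the spirit of the FSM two slides back (make_pkt, udt_send and the timer functions are assumed placeholders, as before):

    class GbnSender:
        """Go-Back-N: window of N unACKed packets, cumulative ACKs, retransmit all on timeout."""
        def __init__(self, N):
            self.N = N
            self.base = 1
            self.nextseqnum = 1
            self.sndpkt = {}

        def rdt_send(self, data):
            if self.nextseqnum < self.base + self.N:
                self.sndpkt[self.nextseqnum] = make_pkt(self.nextseqnum, data)  # assumed helper
                udt_send(self.sndpkt[self.nextseqnum])                          # assumed helper
                if self.base == self.nextseqnum:
                    start_timer()                                               # assumed helper
                self.nextseqnum += 1
                return True
            return False            # window full: caller must buffer or block

        def on_ack(self, acknum):   # cumulative ACK moves the base forward
            self.base = acknum + 1
            if self.base == self.nextseqnum:
                stop_timer()        # assumed helper: nothing left in flight
            else:
                start_timer()       # restart for the new oldest unACKed packet

        def on_timeout(self):       # resend every packet still in the window
            start_timer()
            for seq in range(self.base, self.nextseqnum):
                udt_send(self.sndpkt[seq])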

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example: seq #'s 0, 1, 2, 3; window size N = 3.

The receiver sees no difference between the two scenarios, and incorrectly passes duplicate data up as new in (a).

Q: what is the relationship between seq # space size and window size?

N <= 2^k / 2 (the window must be no larger than half the sequence-number space)

In real life we use k bits to implement the seq #. Practical issue!

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP: segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data: bi-directional data flow in the same connection; MSS: maximum segment size

connection-oriented: handshaking (exchange of control msgs) inits sender and receiver state (e.g. buffer size) before data exchange

flow controlled: sender will not overwhelm receiver

point-to-point: one sender, one receiver (not multicast)

reliable, in-order byte stream: no "message boundaries" (in the app layer we need delimiters)

pipelined: TCP congestion and flow control set the window size

[Diagram: the application writes data into the TCP send buffer at one socket; TCP carries segments to the other side's receive buffer; the application there reads the data.]

(The 800-lbs gorilla in the transport stack. PAY ATTENTION!)

3-53

TCP segment structure

[TCP segment structure, 32 bits wide: source port, dest port; sequence number; acknowledgement number; header length, unused bits, flags (URG, ACK, PSH, RST, SYN, FIN), receive window; checksum, urgent data pointer; options (variable length); application data (variable length).]

URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection setup/teardown commands.

Receive window: number of bytes the receiver is willing to accept. Sequence and acknowledgement numbers count by "bytes" of data (not segments). Internet checksum (as in UDP).

Due to the options field, TCP has a variable-length header.

3-54

TCP seq. #'s and ACKs

Seq. #: byte-stream "number" of the first byte in the segment's data.

ACK: seq # of the next byte expected from the other side; cumulative ACK.

Q: how does the receiver handle out-of-order segments? A: the TCP spec doesn't say — it is up to the implementor.

[Simple telnet scenario: the user types 'C'; Host A sends Seq=42, ACK=79, data='C'; Host B ACKs receipt of 'C' and echoes it back with Seq=79, ACK=43, data='C'; Host A ACKs receipt of the echoed 'C' with Seq=43, ACK=80.]

Initial seq #'s are negotiated during the 3-way handshake.

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

[Diagrams: if the timeout (based on EstimatedRTT) is too long, the sender reacts slowly to a lost segment; if it is too short, the timer fires before the ACK returns and the segment is retransmitted unnecessarily.]

3-56

TCP Round Trip Time and Timeout

EstimatedRTT = (1 - α)·EstimatedRTT + α·SampleRTT

Exponential weighted moving average: the influence of a past sample decreases exponentially fast; typical value: α = 0.125.

ERTT(0) = 0
ERTT(1) = (1-α)·ERTT(0) + α·SRTT(1) = α·SRTT(1)
ERTT(2) = (1-α)·α·SRTT(1) + α·SRTT(2)
ERTT(3) = (1-α)·(1-α)·α·SRTT(1) + (1-α)·α·SRTT(2) + α·SRTT(3)

3-57

Example RTT estimation:

[Plot: RTT (milliseconds) from gaia.cs.umass.edu to fantasia.eurecom.fr over time (seconds), roughly 100-350 ms; the SampleRTT series fluctuates while the EstimatedRTT series tracks it smoothly.]

3-58

TCP Round Trip Time and Timeout

Setting the timeout (Jacobson/Karels): EstimatedRTT plus a "safety margin": a large variation in EstimatedRTT -> a larger safety margin. First estimate how much SampleRTT deviates from EstimatedRTT:

DevRTT = (1 - β)·DevRTT + β·|SampleRTT - EstimatedRTT|   (typically β = 0.25)

Then set the timeout interval:

TimeoutInterval = EstimatedRTT + 4·DevRTT
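A small sketch of the estimator with the α and β values quoted above; variable names are mine, and bootstrapping from the first sample (rather than from 0) is my simplification:

    class RttEstimator:
        """EWMA RTT estimation plus Jacobson/Karels-style timeout."""
        def __init__(self, alpha=0.125, beta=0.25):
            self.alpha, self.beta = alpha, beta
            self.estimated_rtt = None
            self.dev_rtt = 0.0

        def update(self, sample_rtt):
            if self.estimated_rtt is None:            # first sample bootstraps the estimate
                self.estimated_rtt = sample_rtt
            else:
                self.dev_rtt = ((1 - self.beta) * self.dev_rtt
                                + self.beta * abs(sample_rtt - self.estimated_rtt))
                self.estimated_rtt = ((1 - self.alpha) * self.estimated_rtt
                                      + self.alpha * sample_rtt)
            return self.estimated_rtt + 4 * self.dev_rtt   # TimeoutInterval

    est = RttEstimator()
    for s in (0.100, 0.120, 0.090, 0.300, 0.110):   # SampleRTTs in seconds
        print(round(est.update(s), 3))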

3-59

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP: segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event) {

    event: data received from application above
      create TCP segment with sequence number NextSeqNum
      if (timer currently not running) start timer
      pass segment to IP
      NextSeqNum = NextSeqNum + length(data)

    event: timer timeout
      retransmit not-yet-acknowledged segment with smallest sequence number
      start timer

    event: ACK received, with ACK field value of y
      if (y > SendBase) {
        SendBase = y
        if (there are currently not-yet-acknowledged segments) start timer
      }
  }
} /* end of loop forever */

Comment: SendBase - 1 = last cumulatively ACKed byte.
Example: SendBase - 1 = 71 and y = 73, so the receiver wants 73+; y > SendBase, so new data is ACKed.

3-63

TCP retransmission scenarios

[Timeline diagrams:

Lost ACK scenario: Host A sends Seq=92, 8 bytes of data; Host B's ACK=100 is lost; A times out and retransmits Seq=92; B re-sends ACK=100; SendBase becomes 100.

Premature timeout: A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); the ACKs (ACK=100, ACK=120) arrive only after A's timer for Seq=92 expires, so A needlessly retransmits Seq=92; B replies ACK=120; SendBase advances 100 -> 120.]

3-64

TCP retransmission scenarios (more)

[Timeline diagram — cumulative ACK scenario: Host A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); B's ACK=100 is lost, but ACK=120 arrives before the timeout, so SendBase jumps to 120 and nothing is retransmitted.]

Room for improvement?

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event: arrival of in-order segment with expected seq #; all data up to the expected seq # already ACKed.
Action: delayed ACK — wait up to 500 msec for the next segment; if no next segment, send the ACK.

Event: arrival of in-order segment with expected seq #; one other segment has an ACK pending.
Action: immediately send a single cumulative ACK, ACKing both in-order segments.

Event: arrival of out-of-order segment with higher-than-expected seq # (gap detected).
Action: immediately send a duplicate ACK, indicating the seq # of the next expected byte.

Event: arrival of a segment that partially or completely fills a gap.
Action: immediately send an ACK, provided the segment starts at the lower end of the gap.

ACK the "largest in-order byte" seq #. Delayed ACKs REDUCE FEEDBACK TRAFFIC BY HALF.

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments) start timer
  } else {
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y     /* fast retransmit */
  }

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP: segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose the TCP receiver discards out-of-order segments.)

Spare room in the buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]

The receiver advertises the spare room by including the value of RcvWindow in segments.

The sender limits unACKed data to RcvWindow — this guarantees the receive buffer doesn't overflow.

This goes to show that the design of the header fields matters.
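A toy sketch of the two rules above (the receiver advertises its spare room, the sender caps its unACKed bytes at that value); all names are mine:

    def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
        """Spare room the receiver advertises back to the sender."""
        return rcv_buffer - (last_byte_rcvd - last_byte_read)

    def can_send(last_byte_sent, last_byte_acked, nbytes, advertised_window):
        """Sender-side check: unACKed data after sending must not exceed RcvWindow."""
        return (last_byte_sent - last_byte_acked) + nbytes <= advertised_window

    win = rcv_window(rcv_buffer=4096, last_byte_rcvd=9000, last_byte_read=7000)
    print(win, can_send(last_byte_sent=10_000, last_byte_acked=9_000, nbytes=1000, advertised_window=win))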

3-71

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP: segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP: segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causes/costs of congestion: scenario 2

always: λin = λout (goodput)

"perfect" retransmission (retransmit only when loss): λ'in > λout

retransmission of a delayed (not lost) packet makes λ'in larger (than in the perfect case) for the same λout

"costs" of congestion: more work (retransmissions) for a given "goodput"; unneeded retransmissions: the link carries multiple copies of a pkt.

3-82

Causes/costs of congestion: scenario 3: four senders, multihop paths, timeout/retransmit

Q: what happens as λin and λ'in increase?

finite shared output link buffers

Host A: λin = original data, λ'in = original data plus retransmitted data; Host B: λout

3-83

Causescosts of congestion scenario 3

Another "cost" of congestion: when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted!

[Graph: λout collapses as the offered load grows — the system collapses (e.g. students).]

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP: segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission: LastByteSent - LastByteAcked <= CongWin

Roughly: rate = CongWin / RTT bytes/sec

CongWin is a dynamic function of perceived network congestion.

How does the sender perceive congestion? A loss event = timeout or 3 duplicate ACKs. The TCP sender reduces its rate (CongWin) after a loss event.

Three mechanisms: AIMD, slow start, conservative behavior after timeout events.

Note: congestion control must still be efficient, i.e. make use of the available bandwidth.

3-89

TCP AIMD

[Sawtooth plot: congestion window (8, 16, 24 Kbytes) over time for a long-lived TCP connection.]

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

Role of ACKs: 1. an indication of loss; 2. ACKs are self-clocking: if the delay is large, the rate at which the congestion window grows is reduced.

Human analogy: HK government or CUHK.

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement: after 3 dup ACKs, CongWin is cut in half and the window then grows linearly. This is known as the "fast-recovery" phase, implemented in (new) TCP Reno.

But after a timeout event: CongWin is instead set to 1 MSS; the window then grows exponentially up to a threshold, then grows linearly.

Philosophy: 3 dup ACKs indicate the network is still capable of delivering some segments; a timeout before 3 dup ACKs is "more alarming".

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.

Implementation: a variable Threshold. At a loss event, Threshold is set to 1/2 of CongWin just before the loss event.

[Plot: congestion window size (segments) vs. transmission round, showing the threshold line and the different behavior of TCP Tahoe (always drops to 1 MSS) and TCP Reno (halves the window on 3 dup ACKs).]

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase: the window grows exponentially.

When CongWin is above Threshold, the sender is in the congestion-avoidance phase: the window grows linearly.

When a triple duplicate ACK occurs: Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs: Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, it anticipates and prepares for a loss.

3-95

TCP sender congestion control

Event: ACK receipt for previously unACKed data; state: Slow Start (SS).
Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to Congestion Avoidance.
Commentary: results in a doubling of CongWin every RTT.

Event: ACK receipt for previously unACKed data; state: Congestion Avoidance (CA).
Action: CongWin = CongWin + MSS·(MSS/CongWin).
Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT.

Event: loss event detected by triple duplicate ACK; state: SS or CA.
Action: Threshold = CongWin/2; CongWin = Threshold; set state to Congestion Avoidance.
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS.

Event: timeout; state: SS or CA.
Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to Slow Start.
Commentary: enter slow start.

Event: duplicate ACK; state: SS or CA.
Action: increment the duplicate ACK count for the segment being ACKed.
Commentary: CongWin and Threshold not changed.
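A minimal sketch of the table above as code (window sizes in units of MSS; it only captures what the table states and ignores further fast-recovery details):

    class TcpCongestionControl:
        """CongWin/Threshold updates from the table above, in units of MSS."""
        def __init__(self, threshold=8):
            self.congwin = 1.0
            self.threshold = float(threshold)
            self.state = "slow_start"

        def on_new_ack(self):
            if self.state == "slow_start":
                self.congwin += 1.0                      # doubles CongWin every RTT overall
                if self.congwin > self.threshold:
                    self.state = "congestion_avoidance"
            else:
                self.congwin += 1.0 / self.congwin       # +1 MSS per RTT overall

        def on_triple_dup_ack(self):
            self.threshold = max(self.congwin / 2, 1.0)
            self.congwin = self.threshold                # multiplicative decrease (fast recovery)
            self.state = "congestion_avoidance"

        def on_timeout(self):
            self.threshold = max(self.congwin / 2, 1.0)
            self.congwin = 1.0                           # back to 1 MSS
            self.state = "slow_start"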

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when loss occurs.

When the window is W, the throughput is W/RTT.

Just after a loss, the window drops to W/2 and the throughput to W/(2·RTT).

Average throughput: 0.75·W/RTT.

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput.

Requires a window size of W = 83,333 in-flight segments.

Throughput in terms of loss rate:

throughput = 1.22·MSS / (RTT·sqrt(L))

To get 10 Gbps this needs L = 2·10^-10 — wow! New versions of TCP are needed for high-speed networks.
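A quick check of the numbers above (a sketch that just inverts the throughput formula quoted on this slide; names are mine):

    target = 10e9                 # 10 Gbps
    mss_bits = 1500 * 8
    rtt = 0.1                     # 100 ms

    W = target * rtt / mss_bits                    # in-flight segments needed
    L = (1.22 * mss_bits / (rtt * target)) ** 2    # loss rate the formula allows
    print(round(W), L)            # ~83333 segments, ~2.1e-10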

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g. similar propagation delay, etc.)

Additive increase gives a slope of 1 as throughput increases; multiplicative decrease reduces throughput proportionally.

[Figure: Connection 1 throughput vs. Connection 2 throughput, both bounded by R, with the equal-bandwidth-share line. Repeated rounds of congestion avoidance (additive increase) followed by a loss (window cut by a factor of 2) move the operating point toward the equal-share line.]

Page 14: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-14

UDP User Datagram Protocol [RFC 768]

ldquono frillsrdquo ldquobare bonesrdquo Internet transport protocol

ldquobest effortrdquo service UDP segments may be lost delivered out of order

to app connectionless

no handshaking between UDP sender receiver

each UDP segment handled independently of others

Why is there a UDP no connection

establishment (which can add delay)

simple no connection state at sender receiver

small segment header no congestion control

UDP can blast away as fast as desired

3-15

UDP more

often used for streaming multimedia apps loss tolerant rate sensitive

other UDP uses DNS SNMP

reliable transfer over UDP add reliability at application layer application-specific

error recovery (eg FTP based on UDP but with recovery)

source port dest port

32 bits

Applicationdata

(message)

UDP segment format

length checksumLength in

bytes of UDPsegmentincluding

headerWhen the network is stressed you PRAY

3-16

UDP checksum

Sender treat segment

contents as sequence of 16-bit integers

checksum addition (1rsquos complement sum) of segment contents

sender puts checksum value into UDP checksum field

Receiver compute checksum of

received segment check if computed

checksum equals checksum field value NO - error detected YES - no error detected

But maybe errors nonetheless More later hellip

Goal detect ldquoerrorsrdquo (eg flipped bits) in transmitted segment

eg 1+2+3 = 6 So is 0+3+3=6

3-17

Internet Checksum Example Note

When adding numbers a carryout from the most significant bit needs to be added to the result

Example add two 16-bit integers

1 1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 01 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1

1 1 0 1 1 1 0 1 1 1 0 1 1 1 1 0 01 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1

wraparound

sumchecksum

3-18

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-19

Principles of Reliable data transfer important in app transport link layers top-10 list of important networking topics

characteristics of unreliable channel will determine complexity of reliable data transfer protocol (rdt)

abstractionThis picture sets the scenario

3-20

Reliable data transfer getting started

sendside

receiveside

rdt_send() called from above (eg by app)

Passed data to deliver to receiver upper

layer

udt_send() called by rdtto transfer packet over

unreliable channel to receiver

rdt_rcv() called when packet arrives on rcv-side of

channel

deliver_data() called by rdt to deliver data

to upper

Let us now look at the gut of these modules Any question

3-21

Reliable data transfer getting startedWersquoll incrementally develop sender receiver

sides of reliable data transfer protocol (rdt)

consider only unidirectional data transfer but control info will flow on both directions

use finite state machines (FSM) to specify sender receiver

state1

state2

event causing state transitionactions taken on state transition

state when in this ldquostaterdquo next

state uniquely determined by

next eventeventactions

(DONrsquoT FALL ASLEEP)

Event timer receives message hellipetcAction executes a program send message hellipetc

3-22

Rdt10 reliable transfer over a reliable channel

underlying channel perfectly reliable no bit errors no loss of packets

separate FSMs for sender receiver sender sends data into underlying channel receiver reads data from underlying channel

Wait for call from

above packet = make_pkt(data)udt_send(packet)

rdt_send(data)

extract (packetdata)deliver_data(data)

Wait for call from

below

rdt_rcv(packet)

sender receiver

In reality this is an unrealistic assumption but

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in packet recall UDP checksum to detect bit errors

the question how to recover from errors acknowledgements (ACKs) receiver explicitly tells

sender that pkt received OK negative acknowledgements (NAKs) receiver

explicitly tells sender that pkt had errors sender retransmits pkt on receipt of NAK human scenarios using ACKs NAKs

new mechanisms in rdt20 (beyond rdt10) error detection receiver feedback control msgs (ACKNAK) rcvr-

gtsender

Ack I love u I love u 2Nak I love u I donrsquot love u

3-24

rdt20 FSM specification

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

belowsender

receiverrdt_send(data)

Buffer is needed to store data from application layer or to block call

3-25

rdt20 operation with no errors

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

3-26

rdt20 error scenario

(same FSM) Error walk-through: receiver: rdt_rcv(rcvpkt) && corrupt(rcvpkt) -> udt_send(NAK). Sender: rdt_rcv(rcvpkt) && isNAK(rcvpkt) -> udt_send(sndpkt) again, until a clean copy arrives and is ACKed.

GOT IT

3-27

rdt20 has a fatal flaw

What happens if ACKNAK corrupted

sender doesnrsquot know what happened at receiver

canrsquot just retransmit possible duplicate

What to do? Should the sender ACK/NAK the receiver's ACK/NAK? And what if that ACK/NAK is lost too?

retransmit? but this might cause retransmission of a correctly received pkt

Handling duplicates sender adds sequence

number to each pkt sender retransmits

current pkt if ACKNAK garbled

receiver discards (doesnrsquot deliver up) duplicate pkt

Sender sends one packet then waits for receiver response

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

state "Wait for call 0 from above":
  rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt)
state "Wait for ACK or NAK 0":
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isNAK(rcvpkt)): udt_send(sndpkt)
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt): (move on to seq 1)
state "Wait for call 1 from above":
  rdt_send(data): sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt)
state "Wait for ACK or NAK 1":
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isNAK(rcvpkt)): udt_send(sndpkt)
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt): (move on to seq 0)

THE FSM GETS MESSY!

3-29

rdt21 receiver handles garbled ACKNAKs

state "Wait for 0 from below":
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq0(rcvpkt): extract(rcvpkt, data); deliver_data(data); sndpkt = make_pkt(ACK, chksum); udt_send(sndpkt)  -> "Wait for 1 from below"
  rdt_rcv(rcvpkt) && corrupt(rcvpkt): sndpkt = make_pkt(NAK, chksum); udt_send(sndpkt)
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq1(rcvpkt): sndpkt = make_pkt(ACK, chksum); udt_send(sndpkt)   (duplicate: re-ACK, do not deliver)

state "Wait for 1 from below": symmetric, with the roles of seq 0 and seq 1 swapped.

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

sender FSM fragment:
state "Wait for call 0 from above":
  rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt)
state "Wait for ACK 0":
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,1)): udt_send(sndpkt)
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0): (move on)

receiver FSM fragment:
state "Wait for 0 from below":
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq1(rcvpkt): extract(rcvpkt, data); deliver_data(data); sndpkt = make_pkt(ACK1, chksum); udt_send(sndpkt)
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || has_seq1(rcvpkt)): udt_send(sndpkt)

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q: how to deal with loss? The sender could wait until it is certain that the data or ACK was lost, then retransmit.

yuck: drawbacks?

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

state "Wait for call 0 from above":
  rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt): (ignore)
state "Wait for ACK0":
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,1)): (do nothing)
  timeout: udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0): stop_timer  -> "Wait for call 1 from above"
state "Wait for call 1 from above":
  rdt_send(data): sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt): (ignore)
state "Wait for ACK1":
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,0)): (do nothing)
  timeout: udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,1): stop_timer  -> "Wait for call 0 from above"

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt3.0 works, but performance stinks. Example: 1 Gbps link, 15 ms end-to-end propagation delay, 1 KB packet:

T_transmit = L/R = 8000 bits / 10^9 bits/sec = 8 microsec
  (L = packet length in bits, R = transmission rate in bps)

U_sender: utilization, the fraction of time the sender is busy sending:
U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

1 KB pkt every 30 msec -> 33 kB/sec throughput over a 1 Gbps link: the network protocol limits use of the physical resources!
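A quick sanity check of these numbers, as a small Python sketch using the slide's values (the pipelined figure anticipates slide 3-40):

L = 8000           # packet length in bits (1 KB packet)
R = 1e9            # link rate in bits/sec (1 Gbps)
RTT = 0.030        # seconds (2 x 15 ms propagation delay)

t_transmit = L / R                              # 8 microseconds
u_stop_and_wait = (L / R) / (RTT + L / R)       # ~0.00027
u_pipelined_3 = 3 * (L / R) / (RTT + L / R)     # ~0.0008 with 3 pkts in flight
print(t_transmit, u_stop_and_wait, u_pipelined_3)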

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted, t = L/R

first packet bit arrives; last packet bit arrives, send ACK

ACK arrives, send next packet, t = RTT + L/R

U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted, t = L/R

first packet bit arrives; last packet bit arrives, send ACK

ACK arrives, send next packet, t = RTT + L/R

last bit of 2nd packet arrives, send ACK; last bit of 3rd packet arrives, send ACK

U_sender = 3·(L/R) / (RTT + L/R) = 0.024 / 30.008 = 0.0008

Increase utilization by a factor of 3!

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q: what happens when a receiver is totally disconnected? MAX RETRY.

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

rdt_send(data):
  if (nextseqnum < base+N) {
    sndpkt[nextseqnum] = make_pkt(nextseqnum, data, chksum)
    udt_send(sndpkt[nextseqnum])
    if (base == nextseqnum) start_timer
    nextseqnum++
  }
  else refuse_data(data)                /* buffer the data, or block the higher app */

timeout:
  start_timer
  udt_send(sndpkt[base]); udt_send(sndpkt[base+1]); ...; udt_send(sndpkt[nextseqnum-1])

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
  base = getacknum(rcvpkt)+1
  if (base == nextseqnum) stop_timer    /* no pkt in pipe */
  else start_timer                      /* reset timer */

rdt_rcv(rcvpkt) && corrupt(rcvpkt): (ignore)

initially: base = 1; nextseqnum = 1

3-43

GBN receiver extended FSM

ACK-only: always send an ACK for the correctly-received pkt with the highest in-order seq #; this may generate duplicate ACKs; the receiver need only remember expectedseqnum.
out-of-order pkt: discard (don't buffer) -> no receiver buffering; re-ACK the pkt with the highest in-order seq #.

state "Wait":
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum):
    extract(rcvpkt, data); deliver_data(data)
    sndpkt = make_pkt(expectedseqnum, ACK, chksum); udt_send(sndpkt)
    expectedseqnum++
  default: udt_send(sndpkt)

initially: expectedseqnum = 1; sndpkt = make_pkt(expectedseqnum, ACK, chksum)

If an in-order pkt is received, deliver it to the app and ACK; else just drop it. We don't even need a timer at the receiver: simply rely on the retransmissions by the sender.

3-44

GBN in action

What determines the size of the window?
1. RTT
2. Buffer at the receiver (flow control)
3. Network congestion

Q: why can GBN have poor performance? The sender sends pkts 1,2,3,4,5,6,7,8,9; pkt 1 gets lost; the receiver gets pkts 2,3,4,5,... but will discard them all. A sketch of this scenario follows below.

Window size = N = 4
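A minimal sketch (not the slides' FSM; the names and round structure are mine) of exactly that scenario: N = 4, packet 1 is lost on its first transmission, the receiver discards everything out of order, and one window is (re)sent per round:

N, total = 4, 6
lost_on_first_tx = {1}
already_sent = set()
base, expected = 1, 1
delivered = []

for rnd in range(1, 10):                     # each round = one RTT/timeout (a simplification)
    window = list(range(base, min(base + N, total + 1)))
    print("round", rnd, "sender transmits", window)
    for seq in window:
        arrives = not (seq in lost_on_first_tx and seq not in already_sent)
        already_sent.add(seq)
        if arrives and seq == expected:      # in-order: deliver and cumulatively ACK
            delivered.append(seq)
            expected += 1
        # lost or out-of-order: receiver discards and re-ACKs expected-1
    base = expected                          # cumulative ACK slides the window
    if len(delivered) == total:
        break
print("delivered:", delivered)               # pkts 2, 3, 4 had to be transmitted twice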

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer. E.g., the sender sends pkts 1,2,3,4,...,10; the receiver got 2,4,6,8,10; the sender resends only 1,3,5,7,9.

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

sender:
  data from above: if the next available seq # is in the window, send the pkt
  timeout(n): resend pkt n, restart its timer
  ACK(n) in [sendbase, sendbase+N]: mark pkt n as received; if n is the smallest unACKed pkt, advance the window base to the next unACKed seq # (slide the window)

receiver:
  pkt n in [rcvbase, rcvbase+N-1]: send ACK(n); if out-of-order: buffer; if in-order: deliver (also deliver any buffered in-order pkts), advance the window to the next not-yet-received pkt
  pkt n in [rcvbase-N, rcvbase-1]: ACK(n)   (Q: why do we need this? The ACK got lost; the sender may time out and resend the pkt, and we must ACK it again.)
  otherwise: ignore
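A minimal sketch (my own names, not from the slides) of the receiver-side buffering rule just described: ACK everything in the receive window, re-ACK old packets, and deliver a run of consecutive packets as soon as rcvbase is filled.

class SRReceiver:
    def __init__(self, N):
        self.N = N
        self.rcvbase = 1
        self.buffer = {}              # out-of-order pkts waiting for delivery
        self.delivered = []

    def receive(self, seq, data):
        if self.rcvbase - self.N <= seq < self.rcvbase:
            return ('ACK', seq)       # old pkt whose ACK may have been lost: re-ACK
        if self.rcvbase <= seq < self.rcvbase + self.N:
            self.buffer[seq] = data
            while self.rcvbase in self.buffer:            # deliver any in-order run
                self.delivered.append(self.buffer.pop(self.rcvbase))
                self.rcvbase += 1
            return ('ACK', seq)
        return None                   # outside both windows: ignore

r = SRReceiver(N=4)
for seq in (2, 4, 1, 3):              # packets arrive out of order
    r.receive(seq, 'pkt%d' % seq)
print(r.delivered)                    # ['pkt1', 'pkt2', 'pkt3', 'pkt4']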

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example: seq #'s 0, 1, 2, 3; window size (N) = 3.

The receiver sees no difference in the two scenarios, and incorrectly passes duplicate data up as new in (a).

Q: what relationship between seq # space size and window size? N <= 2^k / 2, i.e. the window can be at most half the sequence-number space.

In real life we use k bits to implement the seq #: a practical issue.

3-50

Why bother study reliable data transfer

We know it is provided by TCP, so why bother to study it?

Sometimes we may need to implement "some form" of reliable transfer without the heavy-duty TCP.

A good example is multimedia streaming: even though the application is loss tolerant, if too many packets get lost the visual quality suffers, so we may want to implement some form of reliable transfer ourselves.

At the very least, appreciate the "good services" provided by some Internet gurus.

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data: bi-directional data flow in the same connection; MSS: maximum segment size

connection-oriented: handshaking (exchange of control msgs) inits sender and receiver state (e.g., buffer size) before data exchange

flow controlled: sender will not overwhelm receiver

point-to-point: one sender, one receiver (not multicast)

reliable, in-order byte stream: no "message boundaries" (in the app layer we need delimiters)

pipelined: TCP congestion and flow control set the window size

send & receive buffers: the application writes data into the TCP send buffer at the socket door; segments carry it to the TCP receive buffer at the other socket door, where the application reads data

(The 800 lbs gorilla in the transport stack. PAY ATTENTION!)

3-53

TCP segment structure

32 bits wide:
  source port # | dest port #
  sequence number                                      (counting by "bytes" of data, not segments)
  acknowledgement number
  head len | not used | U A P R S F | receive window   (# bytes rcvr willing to accept)
  checksum (Internet checksum, as in UDP) | urg data pointer
  options (variable length; due to this field we have a variable-length header)
  application data (variable length)

URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection estab (setup, teardown commands)
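To make the 32-bit layout above concrete, here is a minimal sketch in Python (standard struct module; all field values are made-up examples, and the checksum is left at 0 rather than computed over the pseudo-header as real TCP does):

import struct

src_port, dst_port = 12345, 80      # example values, not from the slide
seq, ack = 42, 79
data_offset = 5                     # header length in 32-bit words (5 -> 20 bytes, no options)
flags = 0x10                        # ACK bit (URG=0x20, ACK=0x10, PSH=0x08, RST=0x04, SYN=0x02, FIN=0x01)
recv_window = 65535                 # bytes the receiver is willing to accept
checksum, urg_ptr = 0, 0            # checksum left as 0 in this sketch

header = struct.pack('!HHIIBBHHH',
                     src_port, dst_port,
                     seq, ack,
                     data_offset << 4,   # upper 4 bits: head len; lower 4: not used
                     flags,
                     recv_window,
                     checksum, urg_ptr)
print(len(header))                  # 20 bytes, the fixed part of the header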

3-54

TCP seq. #'s and ACKs
Seq. #'s: byte-stream "number" of the first byte in the segment's data
ACKs: seq # of the next byte expected from the other side; cumulative ACK
Q: how does the receiver handle out-of-order segments? A: the TCP spec doesn't say; it is up to the implementor

Simple telnet scenario (Host A, Host B):
  User types 'C':                              A -> B: Seq=42, ACK=79, data = 'C'
  Host B ACKs receipt of 'C', echoes back 'C': B -> A: Seq=79, ACK=43, data = 'C'
  Host A ACKs receipt of the echoed 'C':       A -> B: Seq=43, ACK=80

Initial seq #'s are negotiated during the 3-way handshake.

3-55

TCP Round Trip Time and Timeout
Q: how to set the TCP timeout value?
  longer than RTT, but RTT varies
  too short: premature timeout, unnecessary retransmissions
  too long: slow reaction to segment loss, poor performance
Q: how to estimate RTT?
  SampleRTT: measured time from segment transmission until ACK receipt; ignore retransmissions
  SampleRTT will vary; we want the estimated RTT "smoother": average several recent measurements, not just the current SampleRTT
(figure: tx/retx/ack timelines showing an EstimatedRTT that is too long and one that is too short)

3-56

TCP Round Trip Time and Timeout
EstimatedRTT = (1 - α)·EstimatedRTT + α·SampleRTT

Exponential weighted moving average: the influence of a past sample decreases exponentially fast; typical value: α = 0.125.

Expanding, with ERTT(0) = 0:
ERTT(1) = (1-α)·ERTT(0) + α·SRTT(1) = α·SRTT(1)
ERTT(2) = (1-α)·α·SRTT(1) + α·SRTT(2)
ERTT(3) = (1-α)·(1-α)·α·SRTT(1) + (1-α)·α·SRTT(2) + α·SRTT(3)

3-57

Example RTT estimation: RTT from gaia.cs.umass.edu to fantasia.eurecom.fr
(figure: SampleRTT and EstimatedRTT in milliseconds, roughly 100-350 ms, plotted against time in seconds; EstimatedRTT is a smoothed version of the fluctuating SampleRTT)

3-58

TCP Round Trip Time and Timeout
Setting the timeout (Jacobson/Karels): EstimatedRTT plus a "safety margin"; a large variation in EstimatedRTT -> a larger safety margin.
First estimate how much SampleRTT deviates from EstimatedRTT:
  DevRTT = (1 - β)·DevRTT + β·|SampleRTT - EstimatedRTT|     (typically β = 0.25)
Then set the timeout interval:
  TimeoutInterval = EstimatedRTT + 4·DevRTT
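A minimal sketch of this estimator in Python (α = 0.125, β = 0.25; the SampleRTT values and the initialization of EstimatedRTT/DevRTT from the first sample are made-up assumptions, not from the slide):

def rtt_estimator(samples, alpha=0.125, beta=0.25):
    # Initialize from the first sample (a common choice, assumed here).
    est, dev = samples[0], samples[0] / 2.0
    for s in samples[1:]:
        dev = (1 - beta) * dev + beta * abs(s - est)
        est = (1 - alpha) * est + alpha * s
        yield est, dev, est + 4 * dev          # (EstimatedRTT, DevRTT, TimeoutInterval)

for est, dev, rto in rtt_estimator([0.120, 0.210, 0.150, 0.400, 0.130]):
    print("EstimatedRTT %.3f  DevRTT %.3f  TimeoutInterval %.3f" % (est, dev, rto))

A single delayed sample (0.400 above) bumps DevRTT, and therefore the timeout, much more than it bumps EstimatedRTT, which is exactly the "safety margin" behaviour described on this slide.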

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender events:
data rcvd from app: create a segment with seq #; the seq # is the byte-stream number of the first data byte in the segment; start the timer if it is not already running (think of the timer as being for the oldest unacked segment); expiration interval: TimeOutInterval
timeout: retransmit the segment that caused the timeout; restart the timer
ACK rcvd: if it acknowledges previously unacked segments, update what is known to be acked; start the timer if there are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event) {

    event: data received from application above
      create TCP segment with sequence number NextSeqNum
      if (timer currently not running) start timer
      pass segment to IP
      NextSeqNum = NextSeqNum + length(data)

    event: timer timeout
      retransmit not-yet-acknowledged segment with smallest sequence number
      start timer

    event: ACK received, with ACK field value of y
      if (y > SendBase) {
        SendBase = y
        if (there are currently not-yet-acknowledged segments) start timer
      }
  }
} /* end of loop forever */

Comment: SendBase-1 is the last cumulatively ACKed byte. Example: SendBase-1 = 71, y = 73, so the rcvr wants 73+; y > SendBase, so new data is acked.

3-63

TCP retransmission scenarios

lost ACK scenario: Host A sends Seq=92, 8 bytes data; Host B's ACK=100 is lost (X); A's timer for Seq=92 expires, so A resends Seq=92, 8 bytes data; B ACKs 100 again; SendBase = 100.

premature timeout: A sends Seq=92, 8 bytes data and then Seq=100, 20 bytes data; the ACK=100 and ACK=120 are delayed; A's timer for Seq=92 expires and A resends Seq=92, 8 bytes data; the delayed ACKs then arrive (SendBase = 100, then SendBase = 120), and B answers the duplicate segment with another ACK=120.

3-64

TCP retransmission scenarios (more)

Cumulative ACK scenario: Host A sends Seq=92, 8 bytes data and Seq=100, 20 bytes data; ACK=100 is lost (X), but ACK=120 arrives before the timeout, so SendBase = 120 and nothing is retransmitted.

Room for improvement?

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at receiver -> TCP receiver action:

Arrival of in-order segment with expected seq #; all data up to the expected seq # already ACKed -> delayed ACK: wait up to 500 ms for the next segment; if no next segment, send ACK
Arrival of in-order segment with expected seq #; one other segment has an ACK pending -> immediately send a single cumulative ACK, ACKing both in-order segments
Arrival of out-of-order segment with higher-than-expected seq #; gap detected -> immediately send a duplicate ACK, indicating the seq # of the next expected byte
Arrival of a segment that partially or completely fills the gap -> immediately send an ACK, provided that the segment starts at the lower end of the gap

ACK the "largest in-order byte" seq #. Delayed ACKs REDUCE FEEDBACK TRAFFIC BY HALF.

3-66

Fast Retransmit

The time-out period is often relatively long: a long delay before resending a lost packet.

Detect lost segments via duplicate ACKs: the sender often sends many segments back-to-back; if a segment is lost, there will likely be many duplicate ACKs.

If the sender receives 3 duplicate ACKs for the same data, it supposes that the segment after the ACKed data was lost: fast retransmit, i.e. resend the segment before the timer expires.

3-67

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments) start timer
  }
  else {                                  /* a duplicate ACK for an already ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y      /* fast retransmit */
  }

Q: why resend the pkt with seq # y? A: that is what the receiver expects.

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

The receive side of a TCP connection has a receive buffer; the app process may be slow at reading from the buffer.

flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast.

speed-matching service: matching the send rate to the receiving app's drain rate.

3-70

TCP Flow control how it works

(Suppose the TCP receiver discards out-of-order segments.)

spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]

The rcvr advertises the spare room by including the value of RcvWindow in segments.

The sender limits unACKed data to RcvWindow: this guarantees the receive buffer doesn't overflow.

This goes to show that the design of the header is important.
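A tiny sketch of that arithmetic (the byte counts are made-up examples):

RcvBuffer = 64 * 1024        # receiver buffer size, bytes
LastByteRcvd = 150000        # highest byte number placed into the buffer so far
LastByteRead = 120000        # highest byte number the application has read

RcvWindow = RcvBuffer - (LastByteRcvd - LastByteRead)
print(RcvWindow)             # 35536 bytes of spare room the sender may keep unACKed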

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender and receiver establish a "connection" before exchanging data segments; they initialize TCP variables: seq #'s, buffers, flow control info (e.g., RcvWindow).
client: connection initiator: Socket clientSocket = new Socket("hostname","port number");
server: contacted by client: Socket connectionSocket = welcomeSocket.accept();

Three way handshake:
Step 1: client host sends a TCP SYN segment to the server; specifies the initial seq #; no data.
Step 2: server host receives the SYN, replies with a SYN-ACK segment; the server allocates buffers and specifies the server's initial seq. #.
Step 3: client receives SYN-ACK, replies with an ACK segment, which may contain data.

IMPORTANT: SYN FLOODING (an attack that exploits the buffer allocation in step 2).

3-73

TCP three-way handshake

client -> server: SYN=1, seq=client_isn                       (connection request)
server -> client: SYN=1, seq=server_isn, ack=client_isn+1     (connection granted)
client -> server: SYN=0, seq=client_isn+1, ack=server_isn+1   (ACK)

3-74

TCP Connection Management (cont)

Closing a connection

The client closes its socket: clientSocket.close();

Step 1: the client end system sends a TCP FIN control segment to the server.

Step 2: the server receives the FIN, replies with an ACK; it closes the connection and sends its own FIN.

(timeline: client sends FIN (close); server replies ACK, then FIN (close); client replies ACK and enters timed wait before closed)

Q: why don't we combine the ACK and the FIN? The sender may still have some data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3: the client receives the FIN, replies with an ACK; it enters "timed wait", during which it will respond with an ACK to any received FINs.

Step 4: the server receives the ACK; the connection is closed.

Note: with a small modification, this can handle simultaneous FINs.

(timeline: client FIN -> server ACK; server FIN -> client ACK; both sides pass through closing to closed, the client after its timed wait)

3-76

TCP Connection Management (cont)

(figures: TCP client lifecycle and TCP server lifecycle state diagrams)

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion, informally: "too many sources sending too much data too fast for the network to handle".

It is different from flow control. Manifestations: lost packets (buffer overflow at routers); long delays (queueing in router buffers).

Another top-10 problem!

TCP provides one of the MANY WAYS to perform CC.

3-79

Causescosts of congestion scenario 1

two senders, two receivers (homogeneous); one router, infinite buffers, capacity C; no retransmission

Host A: λ_in (original data, from the application); Host B: λ_out (to the application); unlimited shared output link buffers

large delays when congested; maximum achievable throughput

3-80

Causescosts of congestion scenario 2

one router, finite buffers; sender retransmission of lost packets

Host A: λ_in (original data) and λ'_in (original data plus retransmitted data, due to the transport layer resending pkts that got dropped); Host B: λ_out; finite shared output link buffers

3-81

Causes/costs of congestion: scenario 2

always: λ_in = λ_out (goodput)
"perfect" retransmission, only when loss: λ'_in > λ_out
retransmission of a delayed (not lost) packet makes λ'_in larger (than the perfect case) for the same λ_out

"costs" of congestion: more work (retransmissions) for a given "goodput"; unneeded retransmissions: the link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3
four senders; multihop paths; timeout/retransmit

Q: what happens as λ_in and λ'_in increase?

Host A: λ_in (original data) and λ'_in (original data plus retransmitted data); Host B: λ_out; finite shared output link buffers

3-83

Causescosts of congestion scenario 3

Another "cost" of congestion: when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted. As the offered load keeps growing, λ_out collapses: the system collapses (e.g., students).

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell: a congested switch may lower the ER value in the cell; the sender's send rate is thus the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission: LastByteSent - LastByteAcked <= CongWin

Roughly: rate = CongWin / RTT bytes/sec

CongWin is a dynamic function of perceived network congestion.

How does the sender perceive congestion? loss event = timeout or 3 duplicate acks; the TCP sender reduces its rate (CongWin) after a loss event.

three mechanisms: AIMD, slow start, conservative behaviour after timeout events

Note: CC must also be efficient, to make use of the available BW.

3-89

TCP AIMD

(figure: congestion window of a long-lived TCP connection over time, a sawtooth oscillating through 8, 16, 24 Kbytes)

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

Role of ACKs: 1. an indication of loss; 2. ACKs are self-clocking: if the delay is large, the rate at which the congestion window grows is reduced.

Human analogy: HK government or CUHK.

3-90

TCP Slow Start

When a connection begins, CongWin = 1 MSS. Example: MSS = 500 bytes & RTT = 200 msec -> initial rate = 20 kbps.

The available bandwidth may be >> MSS/RTT: it is desirable to quickly ramp up to a respectable rate.

So when the connection begins, increase the rate exponentially fast until the first loss event.

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

(figure: congestion window size (segments) vs. transmission round, 1-15, with the threshold marked; TCP Tahoe and TCP Reno grow identically through slow start and congestion avoidance, but after a loss Tahoe drops the window to 1 MSS while Reno halves it)

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event | State | TCP sender action | Commentary

ACK receipt for previously unacked data | Slow Start (SS) | CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance" | Resulting in a doubling of CongWin every RTT
ACK receipt for previously unacked data | Congestion Avoidance (CA) | CongWin = CongWin + MSS·(MSS/CongWin) | Additive increase, resulting in an increase of CongWin by 1 MSS every RTT
Loss event detected by triple duplicate ACK | SS or CA | Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance" | Fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS
Timeout | SS or CA | Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start" | Enter slow start
Duplicate ACK | SS or CA | Increment duplicate ACK count for segment being acked | CongWin and Threshold not changed
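Read as event handlers, the table translates into a short Python sketch (the initial Threshold of 64 KB and the byte units are my assumptions, not from the slide):

MSS = 1460                              # bytes

class TcpSenderCC:
    def __init__(self):
        self.congwin = MSS
        self.threshold = 64 * 1024      # assumed initial value
        self.state = 'SS'               # slow start

    def on_new_ack(self):               # ACK for previously unacked data
        if self.state == 'SS':
            self.congwin += MSS                          # doubles CongWin every RTT
            if self.congwin > self.threshold:
                self.state = 'CA'
        else:                                            # congestion avoidance
            self.congwin += MSS * (MSS / self.congwin)   # ~1 MSS per RTT

    def on_triple_dup_ack(self):        # loss detected by 3 duplicate ACKs
        self.threshold = self.congwin / 2
        self.congwin = max(self.threshold, MSS)          # fast recovery (Reno)
        self.state = 'CA'

    def on_timeout(self):               # loss detected by timeout
        self.threshold = self.congwin / 2
        self.congwin = MSS
        self.state = 'SS'

cc = TcpSenderCC()
for _ in range(60):
    cc.on_new_ack()
cc.on_triple_dup_ack()
print(cc.state, int(cc.congwin))        # 'CA', roughly half of the pre-loss window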

3-96

TCP throughput

What's the average throughput of TCP, as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when a loss occurs.

When the window is W, the throughput is W/RTT.

Just after a loss, the window drops to W/2, and the throughput to W/(2·RTT).

Average throughput: 0.75·W/RTT.

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT; we want 10 Gbps throughput.

This requires a window size of W = 83,333 in-flight segments.

Throughput in terms of the loss rate: Throughput = 1.22·MSS / (RTT·√L)

To sustain 10 Gbps, the loss rate must be L = 2·10^-10. Wow! New versions of TCP for high-speed links are needed.
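A quick check of those two numbers (a sketch; units are bits and seconds):

MSS = 1500 * 8           # segment size in bits
RTT = 0.100              # seconds
target = 10e9            # 10 Gbps

W = target * RTT / MSS                     # in-flight segments needed
L = (1.22 * MSS / (RTT * target)) ** 2     # loss rate the formula allows
print(round(W), L)                         # 83333 and roughly 2e-10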

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g., similar propagation delay, etc.):

Additive increase gives a slope of 1 as throughput increases; multiplicative decrease decreases throughput proportionally.

(figure: Connection 1 throughput vs. Connection 2 throughput, both bounded by R; repeated rounds of congestion-avoidance additive increase followed by loss-triggered halving of the window move the operating point toward the equal-bandwidth-share line)
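A toy model of that argument (mine, not from the slides): two flows add 1 unit per round and both halve when their sum exceeds the link capacity R; the initial 70-unit gap between them shrinks by half at every loss event.

R = 100.0
x1, x2 = 80.0, 10.0          # start far from the fair share
for _ in range(200):
    x1 += 1.0                # additive increase for both flows
    x2 += 1.0
    if x1 + x2 > R:          # shared bottleneck drops packets: both halve
        x1 /= 2.0
        x2 /= 2.0
print(round(x1, 1), round(x2, 1))    # nearly equal rates after a few cycles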

3-100

Fairness (more)

Fairness and UDP: multimedia apps often do not use TCP; they do not want their rate throttled by congestion control. Instead they use UDP, pump audio/video at a constant rate, and tolerate packet loss. (Research area: TCP-friendly congestion control.)

Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts, and Web browsers do this. Example: a link of rate R supporting 9 connections; a new app asking for 1 TCP gets rate R/10; a new app asking for 11 TCPs gets around R/2.

3-101

Delay modeling

Q: how long does it take to receive an object from a Web server after sending a request?

The delay is influenced by: TCP connection establishment; data transmission delay; slow start (the sender's congestion window).

Notation, assumptions: one link between client and server, of rate R; S: MSS (bits); O: object size (bits); no retransmissions (no loss, no corruption); protocol overhead is negligible.

Window size: first assume a fixed congestion window of W segments; then dynamic window modeling (slow start).

3-102

Fixed congestion window (1): assume no congestion-window constraint. Let W denote a "fixed" congestion window size (a positive integer).

First case: W·S/R > RTT + S/R, i.e. the ACK for the first segment in the window returns before a window's worth of data has been sent.

delay = 2·RTT + O/R

(The object request is piggybacked. This is the "lower bound" of the latency.)

3-103

Fixed (or static) congestion window (2)

Second case: W·S/R < RTT + S/R, i.e. the sender must wait for an ACK after sending a window's worth of data.

delay = 2·RTT + O/R + (K-1)·[S/R + RTT - W·S/R]

where K is the number of windows of data that cover the object; if O is the "size" of the object, K = O/(W·S) (rounded up to an integer).

3-104

TCP Delay Modeling Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the delay for one object is

  Latency = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where P is the number of times TCP idles at the server: P = min{Q, K-1},
  - Q is the number of times the server would idle if the object were of infinite size,
  - K is the number of windows that cover the object.

3-105

TCP Delay Modeling Slow Start (2)

(figure: client/server timeline: initiate TCP connection (one RTT), request the object, then windows of S/R, 2S/R, 4S/R, 8S/R, with the server idling between windows until transmission completes and the object is delivered)

Example: O/S = 15 segments; K = 4 windows; Q = 2; P = min{K-1, Q} = 2, so the server idles P = 2 times.

Delay components: 2·RTT for connection establishment and request; O/R to transmit the object; plus the time the server idles due to slow start. The server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

Derivation of the latency formula:

  S/R + RTT = time from when the server starts to send a segment until the server receives its acknowledgement
  2^(k-1)·S/R = time to transmit the k-th window
  idle time after the k-th window = [S/R + RTT - 2^(k-1)·S/R]^+ , for k = 1, 2, ..., K

  delay = 2·RTT + O/R + sum of the idle times
        = 2·RTT + O/R + Σ_{p=1..P} idleTime_p
        = 2·RTT + O/R + Σ_{k=1..P} [S/R + RTT - 2^(k-1)·S/R]
        = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

(figure: the same client/server timeline as on the previous slide, with the windows S/R, 2S/R, 4S/R, 8S/R and the idle gaps marked)

3-107

TCP Delay Modeling (4)

Recall that K is the number of windows that cover the object. How do we calculate K?

  K = min{k : 2^0·S + 2^1·S + ... + 2^(k-1)·S >= O}
    = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
    = min{k : 2^k - 1 >= O/S}
    = min{k : k >= log2(O/S + 1)}
    = ⌈log2(O/S + 1)⌉

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).

3-108

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large).

R | O/R | P | min latency O/R + 2·RTT | latency with slow start
28 kbps | 28.6 sec | 1 | 28.8 sec | 28.9 sec
100 kbps | 8 sec | 2 | 8.2 sec | 8.4 sec
1 Mbps | 800 msec | 5 | 1.0 sec | 1.5 sec
10 Mbps | 80 msec | 7 | 0.28 sec | 0.98 sec

Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small).

R | O/R | P | min latency O/R + 2·RTT | latency with slow start
28 kbps | 1.43 sec | 1 | 1.63 sec | 1.73 sec
100 kbps | 0.4 sec | 2 | 0.6 sec | 0.76 sec
1 Mbps | 40 msec | 3 | 0.24 sec | 0.52 sec
10 Mbps | 4 msec | 3 | 0.20 sec | 0.50 sec

Slow start adds appreciable delay when R is high, even for a relatively small object.

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small).

R | O/R | P | min latency O/R + 2·RTT | latency with slow start
28 kbps | 1.43 sec | 3 | 3.4 sec | 5.8 sec
100 kbps | 0.4 sec | 3 | 2.4 sec | 5.2 sec
1 Mbps | 40 msec | 3 | 2.0 sec | 5.0 sec
10 Mbps | 4 msec | 3 | 2.0 sec | 5.0 sec

Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.
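A sketch that implements the latency formula from slides 3-104/3-107 and roughly reproduces this table; the closed form used for Q, floor(log2(1 + RTT·R/S)) + 1, is the "see HW" calculation and is my assumption:

from math import ceil, log2

def slow_start_latency(O, R, RTT, S):
    # Latency = 2*RTT + O/R + P*(RTT + S/R) - (2**P - 1)*S/R, with P = min(K-1, Q)
    K = ceil(log2(O / S + 1))               # windows that cover the object
    Q = int(log2(1 + RTT * R / S)) + 1      # idles for an infinite-size object (assumed closed form)
    P = min(K - 1, Q)
    return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R

S = 536 * 8                                 # MSS in bits
O = 5 * 1000 * 8                            # 5 kbytes in bits
for R in (28e3, 100e3, 1e6, 10e6):
    print(int(R), round(slow_start_latency(O, R, RTT=1.0, S=S), 1))
# prints roughly 5.8, 5.2, 5.0, 5.0 seconds, matching the slow-start column above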

3-111

HTTP Modeling. Assume a Web page consists of 1 base HTML page (of size O bits) and M images (each of size O bits).

Non-persistent HTTP: M+1 TCP connections in series; response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times.

Persistent HTTP: 2 RTT to request and receive the base HTML file, 1 RTT to request and receive the M images; response time = (M+1)·O/R + 3·RTT + sum of idle times.

Non-persistent HTTP with X parallel connections (suppose M/X is an integer): 1 TCP connection for the base file, then M/X sets of X parallel connections for the images; response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times.
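A sketch of the three formulas, leaving out the "sum of idle times" terms (so these are lower bounds only; the parameter values follow the next slide's setup):

def non_persistent(M, O, R, RTT):
    return (M + 1) * O / R + (M + 1) * 2 * RTT

def persistent(M, O, R, RTT):
    return (M + 1) * O / R + 3 * RTT

def parallel_non_persistent(M, X, O, R, RTT):
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT

O, M, X, RTT = 5 * 1000 * 8, 10, 5, 0.100     # 5 Kbyte objects, RTT = 100 msec
for R in (28e3, 100e3, 1e6, 10e6):
    print(int(R),
          round(non_persistent(M, O, R, RTT), 2),
          round(persistent(M, O, R, RTT), 2),
          round(parallel_non_persistent(M, X, O, R, RTT), 2))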

3-112

(figure: HTTP response time in seconds, 0-20 scale, for 28 kbps, 100 kbps, 1 Mbps and 10 Mbps access rates; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5; bars compare non-persistent, persistent and parallel non-persistent HTTP)

For a low-bandwidth connection, the response time is dominated by the transmission time; persistent connections give only a minor improvement over parallel connections.

3-113

(figure: HTTP response time in seconds, 0-70 scale, for 28 kbps, 100 kbps, 1 Mbps and 10 Mbps access rates; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5; bars compare non-persistent, persistent and parallel non-persistent HTTP)

For a larger RTT, the response time is dominated by TCP establishment and slow-start delays; persistent connections now give an important improvement, particularly in high delay·bandwidth networks.

3-114

Chapter 3: Summary

principles behind transport layer services: multiplexing, demultiplexing; reliable data transfer; flow control; congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application, transport layers) and heading into the network "core".

3-19

Principles of Reliable data transfer important in app transport link layers top-10 list of important networking topics

characteristics of unreliable channel will determine complexity of reliable data transfer protocol (rdt)

abstractionThis picture sets the scenario

3-20

Reliable data transfer getting started

sendside

receiveside

rdt_send() called from above (eg by app)

Passed data to deliver to receiver upper

layer

udt_send() called by rdtto transfer packet over

unreliable channel to receiver

rdt_rcv() called when packet arrives on rcv-side of

channel

deliver_data() called by rdt to deliver data

to upper

Let us now look at the gut of these modules Any question

3-21

Reliable data transfer getting startedWersquoll incrementally develop sender receiver

sides of reliable data transfer protocol (rdt)

consider only unidirectional data transfer but control info will flow on both directions

use finite state machines (FSM) to specify sender receiver

state1

state2

event causing state transitionactions taken on state transition

state when in this ldquostaterdquo next

state uniquely determined by

next eventeventactions

(DONrsquoT FALL ASLEEP)

Event timer receives message hellipetcAction executes a program send message hellipetc

3-22

Rdt10 reliable transfer over a reliable channel

underlying channel perfectly reliable no bit errors no loss of packets

separate FSMs for sender receiver sender sends data into underlying channel receiver reads data from underlying channel

Wait for call from

above packet = make_pkt(data)udt_send(packet)

rdt_send(data)

extract (packetdata)deliver_data(data)

Wait for call from

below

rdt_rcv(packet)

sender receiver

In reality this is an unrealistic assumption but

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in packet recall UDP checksum to detect bit errors

the question how to recover from errors acknowledgements (ACKs) receiver explicitly tells

sender that pkt received OK negative acknowledgements (NAKs) receiver

explicitly tells sender that pkt had errors sender retransmits pkt on receipt of NAK human scenarios using ACKs NAKs

new mechanisms in rdt20 (beyond rdt10) error detection receiver feedback control msgs (ACKNAK) rcvr-

gtsender

Ack I love u I love u 2Nak I love u I donrsquot love u

3-24

rdt20 FSM specification

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

belowsender

receiverrdt_send(data)

Buffer is needed to store data from application layer or to block call

3-25

rdt20 operation with no errors

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

3-26

rdt20 error scenario

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

GOT IT

3-27

rdt20 has a fatal flaw

What happens if ACKNAK corrupted

sender doesnrsquot know what happened at receiver

canrsquot just retransmit possible duplicate

What to do sender ACKsNAKs

receiverrsquos ACKNAK What if sender ACKNAK lost

retransmit but this might cause retransmission of correctly received pkt

Handling duplicates sender adds sequence

number to each pkt sender retransmits

current pkt if ACKNAK garbled

receiver discards (doesnrsquot deliver up) duplicate pkt

Sender sends one packet then waits for receiver response

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq. #'s and ACKs

Seq. #'s: byte-stream "number" of the first byte in the segment's data
ACKs: seq # of the next byte expected from the other side; cumulative ACK
Q: how does the receiver handle out-of-order segments?
A: the TCP spec doesn't say -- up to the implementor

Simple telnet scenario (initial seq #'s are negotiated during the 3-way handshake):
  user types 'C':                         Host A -> Host B:  Seq=42, ACK=79, data = 'C'
  host B ACKs receipt of 'C',
  echoes back 'C':                        Host B -> Host A:  Seq=79, ACK=43, data = 'C'
  host A ACKs receipt of the echoed 'C':  Host A -> Host B:  Seq=43, ACK=80

3-55

TCP Round Trip Time and Timeout

Q: how to set the TCP timeout value?
  longer than RTT, but RTT varies
  too short: premature timeout, unnecessary retransmissions
  too long: slow reaction to segment loss, poor performance

Q: how to estimate RTT?
  SampleRTT: measured time from segment transmission until ACK receipt; ignore retransmissions
  SampleRTT will vary; we want the estimated RTT "smoother": average several recent measurements, not just the current SampleRTT

[Figure: tx/retx/ack timelines showing an EstimatedRTT that is too long and one that is too short]

3-56

TCP Round Trip Time and Timeout

EstimatedRTT = (1 - α)·EstimatedRTT + α·SampleRTT

Exponential weighted moving average: the influence of a past sample decreases exponentially fast; typical value α = 0.125

With ERTT(0) = 0:
  ERTT(1) = (1-α)·ERTT(0) + α·SRTT(1) = α·SRTT(1)
  ERTT(2) = (1-α)·α·SRTT(1) + α·SRTT(2)
  ERTT(3) = (1-α)·(1-α)·α·SRTT(1) + (1-α)·α·SRTT(2) + α·SRTT(3)

3-57

Example RTT estimation: RTT from gaia.cs.umass.edu to fantasia.eurecom.fr

[Figure: RTT (milliseconds, roughly 100-350) vs. time (seconds, 1-106): the SampleRTT curve fluctuates; the EstimatedRTT curve tracks it smoothly]

3-58

TCP Round Trip Time and Timeout

Setting the timeout (Jacobson/Karels): EstimatedRTT plus a "safety margin"
  large variation in EstimatedRTT -> larger safety margin
  first estimate how much SampleRTT deviates from EstimatedRTT:

    DevRTT = (1 - β)·DevRTT + β·|SampleRTT - EstimatedRTT|      (typically β = 0.25)

  then set the timeout interval:

    TimeoutInterval = EstimatedRTT + 4·DevRTT

3-59
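A minimal Python sketch of the estimator above (function and variable names are mine; the SampleRTT values are made up): EWMA with α = 0.125, deviation with β = 0.25, and TimeoutInterval = EstimatedRTT + 4·DevRTT.

    def update_rtt(estimated_rtt, dev_rtt, sample_rtt, alpha=0.125, beta=0.25):
        # EstimatedRTT = (1-alpha)*EstimatedRTT + alpha*SampleRTT  (exponential weighted moving average)
        estimated_rtt = (1 - alpha) * estimated_rtt + alpha * sample_rtt
        # DevRTT = (1-beta)*DevRTT + beta*|SampleRTT - EstimatedRTT|  (the "safety margin")
        dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample_rtt - estimated_rtt)
        timeout_interval = estimated_rtt + 4 * dev_rtt
        return estimated_rtt, dev_rtt, timeout_interval

    est, dev = 0.0, 0.0
    for sample in [105, 120, 95, 250, 110]:   # SampleRTTs in msec (made-up values)
        est, dev, timeout = update_rtt(est, dev, sample)
        print(f"sample={sample}  EstimatedRTT={est:.1f}  DevRTT={dev:.1f}  Timeout={timeout:.1f}")

Note how the one large sample (250 ms) inflates DevRTT, and hence the timeout, much more than it moves EstimatedRTT.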

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    segment structure
    reliable data transfer
    flow control
    connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-60

TCP reliable data transfer

TCP creates an rdt service on top of IP's unreliable service
Pipelined segments (for performance)
Cumulative acks
TCP uses a single retransmission timer
Retransmissions are triggered by: timeout events; duplicate acks (for performance reasons)
Initially, consider a simplified TCP sender: ignore duplicate acks; ignore flow control and congestion control

3-61

TCP sender events

data rcvd from app:
  create segment with seq #; the seq # is the byte-stream number of the first data byte in the segment
  start timer if not already running (think of the timer as for the oldest unacked segment)
  expiration interval: TimeOutInterval

timeout:
  retransmit the segment that caused the timeout
  restart timer

ACK rcvd:
  if it acknowledges previously unacked segments: update what is known to be acked; start timer if there are outstanding segments

3-62

TCP sender (simplified)

   NextSeqNum = InitialSeqNum
   SendBase = InitialSeqNum

   loop (forever) {
     switch(event)

     event: data received from application above
       create TCP segment with sequence number NextSeqNum
       if (timer currently not running)
         start timer
       pass segment to IP
       NextSeqNum = NextSeqNum + length(data)

     event: timer timeout
       retransmit not-yet-acknowledged segment with smallest sequence number
       start timer

     event: ACK received, with ACK field value of y
       if (y > SendBase) {
         SendBase = y
         if (there are currently not-yet-acknowledged segments)
           start timer
       }
   } /* end of loop forever */

Comment:
  SendBase - 1 = last cumulatively ACKed byte
Example:
  SendBase - 1 = 71; y = 73, so the rcvr wants 73+; y > SendBase, so new data is acked

3-63

TCP retransmission scenarios

[Figure: two Host A / Host B timelines]

lost ACK scenario: A sends Seq=92, 8 bytes data; B's ACK=100 is lost (X); A's timer for Seq=92 expires and A resends Seq=92, 8 bytes data; B re-sends ACK=100; SendBase advances to 100.

premature timeout: A sends Seq=92 (8 bytes) then Seq=100 (20 bytes); B returns ACK=100 and ACK=120, but A's timer for Seq=92 expires first, so A needlessly resends Seq=92; SendBase moves from 100 to 120; B answers the duplicate with another ACK=120.

3-64

TCP retransmission scenarios (more)

[Figure: Host A / Host B timeline]

cumulative ACK scenario: A sends Seq=92 (8 bytes) then Seq=100 (20 bytes); ACK=100 is lost (X), but the cumulative ACK=120 arrives before A's timeout, so neither segment is retransmitted; SendBase = 120.

Room for improvement?

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver -> TCP Receiver action

1. arrival of in-order segment with expected seq #, all data up to expected seq # already ACKed -> delayed ACK: wait up to 500 ms for the next segment; if no next segment, send ACK
2. arrival of in-order segment with expected seq #, one other segment has an ACK pending -> immediately send a single cumulative ACK, ACKing both in-order segments (REDUCES FEEDBACK TRAFFIC BY HALF)
3. arrival of out-of-order segment with higher-than-expected seq #, gap detected -> immediately send a duplicate ACK, indicating the seq # of the next expected byte (i.e., ACK the "largest in-order byte")
4. arrival of a segment that partially or completely fills a gap -> immediately send an ACK, provided that the segment starts at the lower end of the gap

(A sketch of these rules as code follows below.)

3-66
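The table can be read as a small decision procedure. A rough sketch under simplifying assumptions (the booleans stand in for real receiver state; names are mine, not from any TCP implementation):

    def receiver_ack_action(in_order, ack_pending, fills_gap):
        """Map the RFC 1122 / RFC 2581 receiver events above to ACK actions."""
        if in_order and not ack_pending:
            return "delayed ACK: wait up to 500 ms for another in-order segment, then ACK"
        if in_order and ack_pending:
            return "immediately send one cumulative ACK covering both in-order segments"
        if fills_gap:
            return "immediately send ACK (segment starts at lower end of gap)"
        return "gap detected: immediately send duplicate ACK for the next expected byte"

    # example: two back-to-back in-order segments arrive
    print(receiver_ack_action(in_order=True, ack_pending=False, fills_gap=False))
    print(receiver_ack_action(in_order=True, ack_pending=True, fills_gap=False))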

Fast Retransmit

Time-out period is often relatively long: long delay before resending a lost packet.
Detect lost segments via duplicate ACKs:
  sender often sends many segments back-to-back
  if a segment is lost, there will likely be many duplicate ACKs
If the sender receives 3 duplicate ACKs for the same data, it assumes the segment after the ACKed data was lost:
  fast retransmit: resend the segment before the timer expires.

3-67

   event: ACK received, with ACK field value of y
     if (y > SendBase) {
       SendBase = y
       if (there are currently not-yet-acknowledged segments)
         start timer
     }
     else {                      /* a duplicate ACK for an already-ACKed segment */
       increment count of dup ACKs received for y
       if (count of dup ACKs received for y == 3)
         resend segment with sequence number y      /* fast retransmit */
     }

Q: why resend the pkt with seq # y?
A: that is what the receiver expects.

3-68

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    segment structure
    reliable data transfer
    flow control
    connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer
app process may be slow at reading from the buffer
flow control: sender won't overflow the receiver's buffer by transmitting too much, too fast
speed-matching service: matching the send rate to the receiving app's drain rate

3-70

TCP Flow control how it works

(Suppose the TCP receiver discards out-of-order segments.)

spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]

Rcvr advertises the spare room by including the value of RcvWindow in segments
Sender limits unACKed data to RcvWindow
  guarantees the receive buffer doesn't overflow (see the sketch below)

This goes to show that the design of the header is important.

3-71
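A quick sketch of the bookkeeping above (variable names follow the slide; the buffer size and byte counters are made-up example values):

    RcvBuffer = 64 * 1024                            # receiver's allocated buffer (example value)
    LastByteRcvd, LastByteRead = 150_000, 120_000    # hypothetical byte counters

    # spare room advertised to the sender in the receive-window header field
    RcvWindow = RcvBuffer - (LastByteRcvd - LastByteRead)
    print(RcvWindow)                                 # 65536 - 30000 = 35536 bytes

    # sender side: keep unACKed data within the advertised window
    LastByteSent, LastByteAcked = 0, 0
    def can_send(nbytes):
        return (LastByteSent - LastByteAcked) + nbytes <= RcvWindow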

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    segment structure
    reliable data transfer
    flow control
    connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender and receiver establish a "connection" before exchanging data segments
initialize TCP variables: seq. #s, buffers, flow control info (e.g., RcvWindow)
client: connection initiator
  Socket clientSocket = new Socket("hostname", port#);
server: contacted by client
  Socket connectionSocket = welcomeSocket.accept();

Three-way handshake:
Step 1: client host sends TCP SYN segment to server: specifies initial seq #, no data
Step 2: server host receives SYN, replies with SYN-ACK segment: server allocates buffers, specifies server initial seq #
Step 3: client receives SYN-ACK, replies with ACK segment, which may contain data

IMPORTANT: SYN FLOODING!

3-73
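The slide's Java calls have Python equivalents. A minimal sketch (the loopback address and port 12000 are arbitrary choices): connect() makes the kernel send the SYN, and accept() returns only once the three-way handshake has completed.

    import socket, threading, time

    def server():
        welcome = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # welcoming socket
        welcome.bind(("127.0.0.1", 12000))                            # arbitrary loopback port
        welcome.listen(1)
        conn, addr = welcome.accept()    # returns only after the 3-way handshake completes
        conn.close()
        welcome.close()

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)                       # give the server a moment to start listening

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", 12000))  # kernel sends SYN, receives SYN-ACK, sends ACK
    client.close()                        # FIN / ACK exchange then tears the connection down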

TCP three-way handshake

[Figure: client/server timeline]
  connection request:  client -> server:  SYN=1, seq=client_isn
  connection granted:  server -> client:  SYN=1, seq=server_isn, ack=client_isn+1
  ACK:                 client -> server:  SYN=0, seq=client_isn+1, ack=server_isn+1

3-74

TCP Connection Management (cont)

Closing a connection:
client closes socket: clientSocket.close();

Step 1: client end system sends TCP FIN control segment to server
Step 2: server receives FIN, replies with ACK; closes the connection, sends FIN

[Figure: client sends FIN (close); server replies ACK, then sends its own FIN (close); client replies ACK, enters timed wait, and then the connection is closed]

Q: why don't we combine the ACK and the FIN?
A: the server may still have some data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3: client receives FIN, replies with ACK
  enters "timed wait" -- will respond with ACK to received FINs
Step 4: server receives ACK; connection closed

Note: with a small modification, this can handle simultaneous FINs

[Figure: client (closing -> timed wait -> closed) and server (closing -> closed) exchanging FIN and ACK segments]

3-76

TCP Connection Management (cont)

[Figure: TCP client lifecycle and TCP server lifecycle state-transition diagrams]

3-77

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    segment structure
    reliable data transfer
    flow control
    connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-78

Principles of Congestion Control

Congestion:
informally: "too many sources sending too much data too fast for the network to handle"
different from flow control!
manifestations:
  lost packets (buffer overflow at routers)
  long delays (queueing in router buffers)
another top-10 problem!

(TCP provides one of the MANY WAYS to perform CC)

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous)
one router, infinite buffers, capacity C
no retransmission

[Figure: Host A and Host B each send λin (original data, from application) into a router with unlimited shared output link buffers; λout is delivered to the application]

large delays when congested
maximum achievable throughput

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers
sender retransmission of lost packets (pkts get dropped, so the transport layer retransmits)

[Figure: Host A sends λin (original data) plus λ'in (original data plus retransmitted data) into a router with finite shared output link buffers; λout is delivered to Host B]

3-81

Causes/costs of congestion: scenario 2

always: λin = λout (goodput)
"perfect" retransmission, only when loss: λ'in > λout
retransmission of a delayed (not lost) packet makes λ'in larger (than the perfect case) for the same λout

"costs" of congestion:
  more work (retransmissions) for a given "goodput"
  unneeded retransmissions: link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3

four senders, multihop paths, timeout/retransmit
Q: what happens as λin and λ'in increase?

[Figure: Host A sends λin (original data) and λ'in (original plus retransmitted data) through routers with finite shared output link buffers; λout is delivered to Host B]

3-83

Causes/costs of congestion: scenario 3

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted.

[Figure: as λ'in grows, λout first rises, then collapses toward zero]

The system collapses (e.g., students!)

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
  no explicit feedback from the network
  congestion inferred from end-system observed loss, delay
  approach taken by TCP

Network-assisted congestion control:
  routers provide feedback to end systems
    single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
    explicit rate the sender should send at

3-85

Case study ATM ABR congestion control

ABR: available bit rate:
  "elastic service"
  if sender's path "underloaded": sender should use the available bandwidth
  if sender's path congested: sender throttled to the minimum guaranteed rate

RM (resource management) cells:
  sent by sender, interspersed with data cells
  bits in RM cell set by switches ("network-assisted"):
    NI bit: no increase in rate (mild congestion)
    CI bit: congestion indication
  RM cells returned to sender by receiver, with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell:
  a congested switch may lower the ER value in the cell
  the sender's send rate is thus the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch
  if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    segment structure
    reliable data transfer
    flow control
    connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)
sender limits transmission:  LastByteSent - LastByteAcked <= CongWin
roughly,  rate = CongWin / RTT  bytes/sec
CongWin is a dynamic function of perceived network congestion

How does the sender perceive congestion?
  loss event = timeout or 3 duplicate acks
  TCP sender reduces rate (CongWin) after a loss event

three mechanisms:
  AIMD
  slow start
  conservative behaviour after timeout events

Note: CC must also be efficient, to make use of the available BW.

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

[Figure: long-lived TCP connection; the congestion window (8, 16, 24 Kbytes) traces a sawtooth over time]

Role of ACKs:
  1. an indication of loss
  2. ACKs are self-clocking: if the delay is large, the rate of congestion-window growth is reduced

Human analogy: HK government or CUHK!

3-90

TCP Slow Start

When the connection begins, CongWin = 1 MSS
  Example: MSS = 500 bytes & RTT = 200 msec
  initial rate = 20 kbps
available bandwidth may be >> MSS/RTT
  desirable to quickly ramp up to a respectable rate
When the connection begins, increase the rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)

When the connection begins, increase the rate exponentially until the first loss event:
  double CongWin every RTT
  done by incrementing CongWin for every ACK received
Summary: the initial rate is slow but ramps up exponentially fast

[Figure: Host A / Host B timeline: one segment, then two segments, then four segments per RTT]

3-92

Refinement

After 3 dup ACKs:
  CongWin is cut in half; the window then grows linearly
  this is known as the "fast recovery" phase; implemented in TCP Reno
But after a timeout event:
  CongWin is instead set to 1 MSS
  the window then grows exponentially to a threshold, then grows linearly

Philosophy:
  3 dup ACKs indicate the network is capable of delivering some segments
  a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: when should the exponential increase switch to linear?
A: when CongWin gets to 1/2 of its value before the timeout.

Implementation:
  variable Threshold
  at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

[Figure: congestion window size (segments, 0-14) vs. transmission round (1-15), showing the threshold and the diverging paths of TCP Tahoe and TCP Reno after a loss]

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.
When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.
When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.
When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss.

3-95

TCP sender congestion control

Event: ACK receipt for previously unacked data; State: Slow Start (SS)
  Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
  Commentary: resulting in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data; State: Congestion Avoidance (CA)
  Action: CongWin = CongWin + MSS * (MSS/CongWin)
  Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK; State: SS or CA
  Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
  Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout; State: SS or CA
  Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
  Commentary: enter slow start

Event: duplicate ACK; State: SS or CA
  Action: increment the duplicate ACK count for the segment being acked
  Commentary: CongWin and Threshold not changed

3-96
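The table maps directly onto a small simulation (a sketch of mine: CongWin is measured in MSS units, one event per RTT, doubling per round approximates the per-ACK increment, and the event sequence is made up):

    def tcp_sender_window(events, threshold=8.0):
        """events: per-RTT list of 'ack', '3dup', or 'timeout'. Returns CongWin (in MSS) after each round."""
        congwin, history = 1.0, []
        for ev in events:
            if ev == "ack":
                if congwin < threshold:
                    congwin *= 2                  # slow start: double every RTT
                else:
                    congwin += 1                  # congestion avoidance: +1 MSS per RTT
            elif ev == "3dup":                    # fast recovery (Reno-style)
                threshold = congwin / 2
                congwin = threshold
            elif ev == "timeout":
                threshold = congwin / 2
                congwin = 1.0                     # back to slow start
            history.append(congwin)
        return history

    print(tcp_sender_window(["ack"] * 6 + ["3dup"] + ["ack"] * 3 + ["timeout"] + ["ack"] * 3))

The printed list shows the familiar sawtooth: exponential growth, a halving at the triple duplicate ACK, and a reset to 1 MSS at the timeout.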

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)
Let W be the window size when a loss occurs.
When the window is W, throughput is W/RTT.
Just after a loss, the window drops to W/2, and throughput to W/(2·RTT).
Average throughput: 0.75 W/RTT

3-97
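The 0.75 factor follows from averaging the sawtooth (a sketch of the argument, ignoring slow start): over one loss-free cycle the window grows linearly from W/2 to W, so its time-average is 3W/4:

    \overline{\text{rate}}
      = \frac{1}{RTT}\cdot\frac{1}{W/2}\int_{W/2}^{W} w\,dw
      = \frac{1}{RTT}\cdot\frac{3W}{4}
      \approx 0.75\,\frac{W}{RTT}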

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires window size W = 83,333 in-flight segments
Throughput in terms of loss rate:

    Throughput = 1.22 · MSS / (RTT · √L)

=> to sustain 10 Gbps, L = 2·10^-10.  Wow!
New versions of TCP for high-speed networks are needed!

3-98
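Both numbers on this slide can be checked directly (a sketch; MSS is taken as the full 1500-byte segment in bits):

    MSS = 1500 * 8            # segment size in bits
    RTT = 0.100               # seconds
    target = 10e9             # desired throughput in bits/sec

    W = target * RTT / MSS                        # in-flight segments needed
    L = (1.22 * MSS / (RTT * target)) ** 2        # loss rate implied by Throughput = 1.22*MSS/(RTT*sqrt(L))
    print(f"W = {W:,.0f} segments, L = {L:.1e}")  # W = 83,333 segments, L = 2.1e-10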

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g., similar propagation delay, etc.):
  additive increase gives a slope of 1 as throughput increases
  multiplicative decrease decreases throughput proportionally

[Figure: Connection 1 throughput vs. Connection 2 throughput, both bounded by R; the trajectory oscillates around the "equal bandwidth share" line: congestion avoidance (additive increase), then loss (decrease window by a factor of 2), repeated]

3-100

Fairness (more)

Fairness and UDP:
  multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
  instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
  research area: TCP-friendly congestion control

Fairness and parallel TCP connections:
  nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this
  example: a link of rate R supporting 9 connections
    new app asks for 1 TCP connection: gets rate R/10
    new app asks for 11 TCP connections: gets around R/2

3-101

Delay modeling

Q: how long does it take to receive an object from a Web server after sending a request?

Delay is influenced by:
  TCP connection establishment
  data transmission delay
  slow start (the sender's congestion window)

Notation, assumptions:
  assume one link between client and server, of rate R
  S: MSS (bits); O: object size (bits)
  no retransmissions (no loss, no corruption)
  protocol overhead is negligible

Window size:
  first assume a fixed congestion window of W segments
  then a dynamic window, modeling slow start

3-102

Fixed congestion window (1): assume no congestion window constraint

Let W denote a "fixed" congestion window size (a positive integer).

First case: WS/R > RTT + S/R
  the ACK for the first segment in the window returns before a window's worth of data has been sent
  delay = 2RTT + O/R
  (the object request is piggybacked; this is the "lower bound" of latency)

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R
  wait for the ACK after sending a window's worth of data
  delay = 2RTT + O/R + (K-1)·[S/R + RTT - WS/R]
  where K is the number of windows of data that cover the object: if O is the "size" of the object, K = O/(WS), rounded up to an integer
  (a sketch of both cases as code follows)

3-104
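The two cases above as one function (a sketch; arguments are in bits and bits/sec, and the example numbers are made up):

    import math

    def static_window_delay(O, S, R, RTT, W):
        """Latency to fetch an O-bit object with a fixed window of W segments of S bits each."""
        if W * S / R >= RTT + S / R:                 # case 1: ACKs return before the window is exhausted
            return 2 * RTT + O / R
        K = math.ceil(O / (W * S))                   # number of windows covering the object
        stall = S / R + RTT - W * S / R              # idle time per stalled window
        return 2 * RTT + O / R + (K - 1) * stall

    # example: O = 100 Kbytes, S = 536 bytes, R = 1 Mbps, RTT = 100 ms, W = 4
    print(static_window_delay(O=800_000, S=536 * 8, R=1e6, RTT=0.1, W=4))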

TCP Delay Modeling Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the delay for one object is:

    Latency = 2RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where P is the number of times TCP idles at the server:

    P = min{Q, K-1}

  - Q is the number of times the server would idle if the object were of infinite size
  - K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[Figure: time at client / time at server: initiate TCP connection; request object; first window = S/R; second window = 2S/R; third window = 4S/R; fourth window = 8S/R; complete transmission; object delivered]

Example:
  O/S = 15 segments
  K = 4 windows
  Q = 2
  P = min{K-1, Q} = 2
  server idles P = 2 times

Delay components:
  2 RTT for connection establishment and request
  O/R to transmit the object
  the time the server idles due to slow start

3-106

TCP Delay Modeling (3)

From the timeline (same client/server slow-start figure as before), for k = 1, 2, ..., K:

  time to transmit the kth window:
      2^(k-1) · S/R

  time from when the server starts to send a segment until it receives an acknowledgement:
      S/R + RTT

  idle time after the kth window:
      [S/R + RTT - 2^(k-1)·S/R]^+

  delay = 2RTT + O/R + sum of the idle times
        = 2RTT + O/R + Σ_{k=1..P} [S/R + RTT - 2^(k-1)·S/R]
        = 2RTT + O/R + P·[S/R + RTT] - (2^P - 1)·S/R

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

  K = min{k : 2^0·S + 2^1·S + ... + 2^(k-1)·S >= O}
    = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
    = min{k : 2^k - 1 >= O/S}
    = min{k : k >= log2(O/S + 1)}
    = ⌈log2(O/S + 1)⌉

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).

3-108
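Putting the pieces together, a sketch that computes K, Q, P and the slow-start latency (the expression for Q is my own derivation of when the server stops idling, not taken from the slides; the resulting latencies land in the same ballpark as the experiment tables below):

    import math

    def slow_start_latency(O, S, R, RTT):
        """Latency (sec) to fetch an O-bit object with S-bit segments over a link of rate R (bps)."""
        K = math.ceil(math.log2(O / S + 1))            # windows that cover the object
        Q = math.ceil(math.log2(1 + RTT / (S / R)))    # idles if the object were infinite (assumed formula)
        P = min(K - 1, Q)                              # times the server actually idles
        latency = 2 * RTT + O / R + P * (S / R + RTT) - (2 ** P - 1) * (S / R)
        return K, Q, P, latency

    # Experiment A's 1 Mbps row as an example: O = 100 Kbytes, S = 536 bytes, RTT = 100 ms
    print(slow_start_latency(O=800_000, S=536 * 8, R=1e6, RTT=0.1))   # P = 5, latency ~ 1.4 s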

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 Kbytes (relatively large)

  R          O/R        P    Min latency = O/R + 2·RTT    Latency with slow start
  28 kbps    28.6 sec   1    28.8 sec                     28.9 sec
  100 kbps    8 sec     2     8.2 sec                      8.4 sec
  1 Mbps      800 msec  5     1.0 sec                      1.5 sec
  10 Mbps     80 msec   7     0.28 sec                     0.98 sec

1. Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 Kbytes (relatively small)

  R          O/R       P    Min latency = O/R + 2·RTT    Latency with slow start
  28 kbps    1.43 sec  1    1.63 sec                     1.73 sec
  100 kbps   0.4 sec   2    0.6 sec                      0.76 sec
  1 Mbps     40 msec   3    0.24 sec                     0.52 sec
  10 Mbps    4 msec    3    0.20 sec                     0.50 sec

1. Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 Kbytes (relatively small)

  R          O/R       P    Min latency = O/R + 2·RTT    Latency with slow start
  28 kbps    1.43 sec  3    3.4 sec                      5.8 sec
  100 kbps   0.4 sec   3    2.4 sec                      5.2 sec
  1 Mbps     40 msec   3    2.0 sec                      5.0 sec
  10 Mbps    4 msec    3    2.0 sec                      5.0 sec

1. Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
  1 base HTML page (of size O bits)
  M images (each also of size O bits)

Non-persistent HTTP:
  M+1 TCP connections in series
  Response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times

Persistent HTTP:
  2 RTT to request and receive the base HTML file
  1 RTT to request and receive the M images
  Response time = (M+1)·O/R + 3·RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
  suppose M/X is an integer
  1 TCP connection for the base file; M/X sets of parallel connections for the images
  Response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times

3-112
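A sketch of the three formulas side by side (idle times are ignored here, so these are only the transmission-plus-RTT terms; the parameters follow the charts below):

    def http_response_time(O, R, RTT, M, X):
        transmit = (M + 1) * O / R
        nonpersistent = transmit + (M + 1) * 2 * RTT
        persistent = transmit + 3 * RTT
        parallel = transmit + (M / X + 1) * 2 * RTT
        return nonpersistent, persistent, parallel

    O, RTT, M, X = 5 * 8000, 0.1, 10, 5            # 5 Kbytes per object, 100 ms RTT
    for R in [28e3, 100e3, 1e6, 10e6]:
        np_, p, par = http_response_time(O, R, RTT, M, X)
        print(f"R={R/1e3:>6.0f} kbps  non-persistent={np_:5.2f}s  persistent={p:5.2f}s  parallel={par:5.2f}s")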

[Chart: HTTP response time (seconds, 0-20) vs. access bandwidth (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5]

For a low-bandwidth connection, response time is dominated by transmission time.
Persistent connections give only a minor improvement over parallel connections.

3-113

[Chart: HTTP response time (seconds, 0-70) vs. access bandwidth (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5]

For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay·bandwidth networks.

3-114

Chapter 3: Summary

principles behind transport-layer services:
  multiplexing, demultiplexing
  reliable data transfer
  flow control
  congestion control
instantiation and implementation in the Internet: UDP, TCP

Next:
  leaving the network "edge" (application, transport layers)
  into the network "core"


Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

[Figure] Congestion window (8, 16, 24 Kbytes) vs. time for a long-lived TCP connection

Role of ACKs:
1. An indication of loss
2. ACKs are self-clocking: if delay is large, the rate of congestion-window growth is reduced

Human analogy: HK government or CUHK
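As a concrete illustration of the AIMD rule above, here is a minimal sketch (not from the slides; the variable names and the MSS value are illustrative) that halves the window on a loss and otherwise adds one MSS per RTT:

    # Minimal AIMD sketch (illustrative): +1 MSS per RTT, halve on a loss event.
    MSS = 1500  # bytes, assumed segment size

    def aimd_step(cong_win, loss_event):
        """Return the congestion window (bytes) after one RTT."""
        if loss_event:
            return max(MSS, cong_win // 2)   # multiplicative decrease, floor at 1 MSS
        return cong_win + MSS                # additive increase

    # A loss every 8th RTT produces the familiar sawtooth of the figure.
    w = 8 * MSS
    for rtt in range(1, 25):
        w = aimd_step(w, loss_event=(rtt % 8 == 0))
        print(rtt, w)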

3-90

TCP Slow Start

When connection begins, CongWin = 1 MSS. Example: MSS = 500 bytes & RTT = 200 msec, so initial rate = 20 kbps

available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate

When connection begins, increase rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)

When connection begins, increase rate exponentially until the first loss event: double CongWin every RTT, done by incrementing CongWin (by 1 MSS) for every ACK received

Summary: initial rate is slow but ramps up exponentially fast

[Figure] Timing diagram between Host A and Host B: one segment in the first RTT, two segments in the next, then four segments
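To see why a per-ACK increment doubles CongWin each RTT, here is a tiny sketch (illustrative only; it assumes one ACK per segment and no loss):

    # Slow-start sketch: an n-segment window yields n ACKs per RTT,
    # and +1 MSS per ACK therefore doubles CongWin every RTT.
    MSS = 500  # bytes, as in the example above

    cong_win = 1 * MSS
    for rtt in range(1, 6):
        acks = cong_win // MSS          # one ACK per in-flight segment
        cong_win += acks * MSS          # +1 MSS per ACK
        print(f"after RTT {rtt}: CongWin = {cong_win // MSS} MSS")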

3-92

Refinement

After 3 dup ACKs: CongWin is cut in half, and the window then grows linearly. This is known as the "fast-recovery" phase, implemented in the newer TCP Reno.

But after a timeout event: CongWin is instead set to 1 MSS; the window then grows exponentially up to a threshold, then grows linearly.

Philosophy:
• 3 dup ACKs indicate the network is capable of delivering some segments
• a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?

A: When CongWin gets to 1/2 of its value before the timeout.

Implementation:
Variable Threshold
At a loss event, Threshold is set to 1/2 of CongWin just before the loss event

[Figure] Congestion window size (segments) vs. transmission round, showing the threshold and the trajectories of TCP Tahoe and TCP Reno

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.

When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss.

3-95

TCP sender congestion control

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
TCP sender action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
TCP sender action: CongWin = CongWin + MSS * (MSS/CongWin)
Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
State: SS or CA
TCP sender action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
State: SS or CA
TCP sender action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: enter slow start

Event: duplicate ACK
State: SS or CA
TCP sender action: increment duplicate ACK count for the segment being ACKed
Commentary: CongWin and Threshold not changed
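The event table above maps directly onto a small event-driven sketch. This is an illustrative simplification, not the code of any real TCP stack; the class and variable names are mine, and CongWin/Threshold are kept in bytes.

    MSS = 1460  # bytes (assumed)

    class RenoSender:
        """Toy TCP-Reno-style sender state, following the event table above."""
        def __init__(self):
            self.cong_win = 1 * MSS
            self.threshold = 64 * 1024      # common initial value (assumption)
            self.state = "SS"               # "SS" = slow start, "CA" = congestion avoidance
            self.dup_acks = 0

        def on_new_ack(self):               # ACK for previously unacked data
            if self.state == "SS":
                self.cong_win += MSS        # doubles CongWin every RTT
                if self.cong_win > self.threshold:
                    self.state = "CA"
            else:                           # congestion avoidance
                self.cong_win += max(1, MSS * MSS // self.cong_win)  # ~ +1 MSS per RTT
            self.dup_acks = 0

        def on_dup_ack(self):
            self.dup_acks += 1
            if self.dup_acks == 3:          # triple duplicate ACK: fast recovery
                self.threshold = max(self.cong_win // 2, MSS)
                self.cong_win = self.threshold
                self.state = "CA"

        def on_timeout(self):               # timeout: back to slow start
            self.threshold = max(self.cong_win // 2, MSS)
            self.cong_win = 1 * MSS
            self.state = "SS"
            self.dup_acks = 0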

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when a loss occurs.

When the window is W, the throughput is W/RTT.

Just after a loss, the window drops to W/2 and the throughput to W/(2·RTT).

Average throughput: 0.75 · W/RTT
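Where the 0.75 comes from (a quick sanity check, assuming the window ramps roughly linearly from W/2 back up to W between losses):

    average window     = (W/2 + W) / 2 = 3W/4
    average throughput ≈ 0.75 · W / RTT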

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput

Requires window size W = 83,333 in-flight segments

Throughput in terms of loss rate:  throughput = 1.22 · MSS / (RTT · sqrt(L))

=> L = 2·10^-10  Wow! New versions of TCP for high-speed networks are needed!
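A quick numerical check of both figures (a sketch; it simply plugs the example's numbers into the formula quoted above):

    MSS = 1500 * 8        # bits
    RTT = 0.100           # seconds
    target = 10e9         # 10 Gbps

    W = target * RTT / MSS                      # required in-flight segments
    L = (1.22 * MSS / (RTT * target)) ** 2      # loss rate implied by the formula
    print(f"window      ≈ {W:,.0f} segments")   # ≈ 83,333
    print(f"loss rate L ≈ {L:.1e}")             # ≈ 2e-10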

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

[Figure] TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R

3-99

Why is TCP fair?

Two competing homogeneous sessions (e.g., similar propagation delay, etc.):

Additive increase gives a slope of 1 as throughput increases

Multiplicative decrease decreases throughput proportionally

[Figure] Phase plot of Connection 2 throughput vs. Connection 1 throughput (both axes up to R), with the equal bandwidth share line: repeated cycles of congestion avoidance (additive increase) followed by loss (decrease window by factor of 2) move the operating point toward the equal-share line
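A toy simulation of the dynamics in that figure (illustrative only; it assumes synchronized losses whenever the two rates together exceed R):

    # Two AIMD flows sharing capacity R: each adds 1 unit per RTT;
    # when their sum exceeds R, both halve (a synchronized loss) -- a toy model.
    R = 100.0
    x1, x2 = 80.0, 10.0          # deliberately unfair starting rates
    for _ in range(200):
        x1 += 1.0
        x2 += 1.0
        if x1 + x2 > R:          # shared bottleneck overflows
            x1 /= 2.0
            x2 /= 2.0
    print(round(x1, 1), round(x2, 1))   # the two rates end up nearly equal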

3-100

Fairness (more)

Fairness and UDP:

Multimedia apps often do not use TCP: they do not want their rate throttled by congestion control

Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss

Research area: TCP-friendly congestion control

Fairness and parallel TCP connections:

nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this

Example: a link of rate R supporting 9 connections; if a new app asks for 1 TCP connection, it gets rate R/10; if a new app asks for 11 TCP connections, it gets around R/2

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by: TCP connection establishment, data transmission delay, slow start, and the sender's congestion window

Notation, assumptions:

assume one link between client and server, of rate R

S: MSS (bits)

O: object size (bits)

no retransmissions (no loss, no corruption)

protocol overhead is negligible

Window size:

first assume a fixed congestion window of W segments

then dynamic window modeling: slow start

3-102

Fixed congestion window (1): assume no congestion window constraint

Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data has been sent

delay = 2RTT + O/R

(The object request is piggybacked; this is the "lower bound" of latency.)

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R: the sender must wait for an ACK after sending each window's worth of data

delay = 2RTT + O/R + (K - 1)[S/R + RTT - WS/R]

where K is the number of windows of data that cover the object. If O is the "size" of the object, K = O/(WS), rounded up to an integer.
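Both cases of the fixed-window model can be evaluated with a small helper (a sketch under the slide's assumptions; the function and parameter names, and the example numbers, are mine):

    import math

    def static_window_delay(O, S, R, W, RTT):
        """Latency to fetch an O-bit object over a link of R bps with MSS of
        S bits, a fixed window of W segments, and round-trip time RTT (s)."""
        if W * S / R > RTT + S / R:          # case 1: ACKs arrive before the window empties
            return 2 * RTT + O / R
        K = math.ceil(O / (W * S))           # windows covering the object
        stall = S / R + RTT - W * S / R      # idle time at each window boundary
        return 2 * RTT + O / R + (K - 1) * stall

    # Example with assumed numbers: O = 100 kbit, S = 4 kbit, R = 1 Mbps, W = 4, RTT = 100 ms
    print(round(static_window_delay(100e3, 4e3, 1e6, 4, 0.1), 3), "seconds")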

3-104

TCP Delay Modeling Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the delay for one object is:

Latency = 2RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where P is the number of times TCP idles at the server:

P = min{Q, K - 1}

- where Q is the number of times the server would idle if the object were of infinite size
- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[Figure] Timing diagram (time at client, time at server): initiate TCP connection, request object, then windows of S/R, 2S/R, 4S/R, 8S/R until the transmission completes and the object is delivered.

Example:
• O/S = 15 segments
• K = 4 windows
• Q = 2
• P = min{K-1, Q} = 2

Server idles P = 2 times.

Delay components:
• 2 RTT for connection establishment and request
• O/R to transmit the object
• time the server idles due to slow start

Server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

time to transmit the kth window = 2^(k-1) · S/R,  for k = 1, 2, ..., K

time from when the server starts to send a segment until it receives its acknowledgement = S/R + RTT

idle time after the kth window = [S/R + RTT - 2^(k-1) · S/R]+

delay = 2RTT + O/R + sum_{p=1..P} idleTime_p
      = 2RTT + O/R + sum_{k=1..P} [S/R + RTT - 2^(k-1) · S/R]
      = 2RTT + O/R + P·[S/R + RTT] - (2^P - 1)·S/R

[Figure] Same timing diagram as on the previous slide: connection setup and request, then windows of S/R, 2S/R, 4S/R, 8S/R until the object is delivered.

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object.

How do we calculate K?

K = min{k : 2^0·S + 2^1·S + ... + 2^(k-1)·S >= O}
  = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
  = min{k : 2^k - 1 >= O/S}
  = min{k : k >= log2(O/S + 1)}
  = ceil( log2(O/S + 1) )

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
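The whole slow-start model fits in a few lines of code. One caveat: the slides leave the closed form for Q to the homework, so the expression used below is my own derivation (the largest k for which the idle time is still positive), and rounding conventions can make individual results differ slightly from the tables that follow.

    import math

    def slow_start_latency(O, S, R, RTT):
        """Latency (s) to fetch an O-bit object (MSS S bits) over a link of
        rate R bps with round-trip time RTT, using the slow-start model above."""
        K = math.ceil(math.log2(O / S + 1))             # windows covering the object
        Q = math.floor(math.log2(1 + RTT * R / S)) + 1  # idles for an infinite object (my derivation)
        P = min(K - 1, Q)                               # times the server actually idles
        return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R

    # Parameters of Experiment C below: S = 536 bytes, O = 5 Kbytes, RTT = 1 s
    S, O = 536 * 8, 5e3 * 8
    for R in (28e3, 100e3, 1e6, 10e6):
        print(f"R = {R:>8.0f} bps  latency ≈ {slow_start_latency(O, S, R, 1.0):.1f} s")

For the Experiment C parameters this prints roughly 5.8, 5.2, 5.0 and 5.0 seconds, in line with the table.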

3-108

Experiment A

R          O/R        P    Min latency (O/R + 2·RTT)    Latency with slow start
28 kbps    28.6 sec   1    28.8 sec                     28.9 sec
100 kbps   8 sec      2    8.2 sec                      8.4 sec
1 Mbps     800 msec   5    1.0 sec                      1.5 sec
10 Mbps    80 msec    7    0.28 sec                     0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 Kbytes (relatively large)

1. Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R          O/R        P    Min latency (O/R + 2·RTT)    Latency with slow start
28 kbps    1.43 sec   1    1.63 sec                     1.73 sec
100 kbps   0.4 sec    2    0.6 sec                      0.76 sec
1 Mbps     40 msec    3    0.24 sec                     0.52 sec
10 Mbps    4 msec     3    0.20 sec                     0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 Kbytes (relatively small)

1. Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

R          O/R        P    Min latency (O/R + 2·RTT)    Latency with slow start
28 kbps    1.43 sec   3    3.4 sec                      5.8 sec
100 kbps   0.4 sec    3    2.4 sec                      5.2 sec
1 Mbps     40 msec    3    2.0 sec                      5.0 sec
10 Mbps    4 msec     3    2.0 sec                      5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 Kbytes (relatively small)

1. Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
1 base HTML page (of size O bits)
M images (each of size O bits)

Non-persistent HTTP:
M+1 TCP connections in series
Response time = (M+1)·O/R + (M+1)·2RTT + sum of idle times

Persistent HTTP:
2 RTT to request and receive the base HTML file
1 RTT to request and receive the M images
Response time = (M+1)·O/R + 3RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
Suppose M/X is an integer
1 TCP connection for the base file
M/X sets of X parallel connections for the images
Response time = (M+1)·O/R + (M/X + 1)·2RTT + sum of idle times
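A small comparison of the three formulas (a sketch; it deliberately ignores the "sum of idle times" terms, so it captures only the transmission and connection-setup parts):

    def http_response_times(O, R, RTT, M, X):
        """(non-persistent, persistent, parallel non-persistent) response times
        in seconds, ignoring slow-start idle times."""
        base = (M + 1) * O / R
        return (base + (M + 1) * 2 * RTT,      # M+1 serial connections
                base + 3 * RTT,                # persistent: 2 RTT + 1 RTT
                base + (M / X + 1) * 2 * RTT)  # 1 + M/X rounds of connections

    # Parameters of the first chart below: RTT = 100 ms, O = 5 Kbytes, M = 10, X = 5
    for r in (28e3, 100e3, 1e6, 10e6):
        np_, p, par = http_response_times(5e3 * 8, r, 0.1, 10, 5)
        print(f"{r:>8.0f} bps: non-persistent {np_:.2f}s  persistent {p:.2f}s  parallel {par:.2f}s")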

3-112

[Chart] HTTP response time (in seconds) vs. link rate (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5

For low bandwidth, connection and response time are dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

[Chart] HTTP response time (in seconds) vs. link rate (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5

For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay×bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services: multiplexing, demultiplexing, reliable data transfer, flow control, congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application, transport layers) and heading into the network "core"

Page 18: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-18

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-19

Principles of Reliable data transfer important in app transport link layers top-10 list of important networking topics

characteristics of unreliable channel will determine complexity of reliable data transfer protocol (rdt)

(service abstraction) This picture sets the scenario.

3-20

Reliable data transfer getting started

sendside

receiveside

rdt_send(): called from above (e.g., by the app); passes data to be delivered to the receiver's upper layer

udt_send(): called by rdt to transfer a packet over the unreliable channel to the receiver

rdt_rcv(): called when a packet arrives on the receive side of the channel

deliver_data(): called by rdt to deliver data to the upper layer

Let us now look at the guts of these modules. Any questions?
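A minimal Python sketch (not from the slides) of how these interfaces fit together for the perfectly reliable channel of rdt1.0 that follows; make_pkt/extract are trivial placeholder helpers, and the "channel" here is just a direct function call.

def make_pkt(data):
    return {"data": data}

def extract(packet):
    return packet["data"]

class RdtReceiver:
    def __init__(self, deliver_data):
        self.deliver_data = deliver_data        # upper-layer callback

    def rdt_rcv(self, packet):                  # called when a packet arrives from below
        self.deliver_data(extract(packet))

class RdtSender:
    def __init__(self, udt_send):
        self.udt_send = udt_send                # unreliable-channel send routine

    def rdt_send(self, data):                   # called from above (e.g., by the app)
        self.udt_send(make_pkt(data))

# usage: wire the sender straight to the receiver (a perfectly reliable "channel")
receiver = RdtReceiver(deliver_data=print)
sender = RdtSender(udt_send=receiver.rdt_rcv)
sender.rdt_send("hello")                        # prints "hello"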

3-21

Reliable data transfer: getting started. We'll incrementally develop the sender and receiver sides of a reliable data transfer protocol (rdt); consider only unidirectional data transfer, but control info will flow in both directions; use finite state machines (FSMs) to specify the sender and receiver.

state1

state2

Notation: each arc is labeled "event causing state transition / actions taken on state transition". When in a "state", the next state is uniquely determined by the next event (event / actions).

(DON'T FALL ASLEEP!) Event: a timer fires, a message is received, etc. Action: execute a program, send a message, etc.

3-22

Rdt10 reliable transfer over a reliable channel

underlying channel perfectly reliable no bit errors no loss of packets

separate FSMs for sender receiver sender sends data into underlying channel receiver reads data from underlying channel

sender FSM — state Wait for call from above:
  rdt_send(data): packet = make_pkt(data); udt_send(packet)

receiver FSM — state Wait for call from below:
  rdt_rcv(packet): extract(packet, data); deliver_data(data)

In reality this is an unrealistic assumption but

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in packet recall UDP checksum to detect bit errors

the question how to recover from errors acknowledgements (ACKs) receiver explicitly tells

sender that pkt received OK negative acknowledgements (NAKs) receiver

explicitly tells sender that pkt had errors sender retransmits pkt on receipt of NAK human scenarios using ACKs NAKs

new mechanisms in rdt20 (beyond rdt10) error detection receiver feedback control msgs (ACKNAK) rcvr-

gtsender

Human analogy — ACK: "I love you." / "I love you too."  NAK: "I love you." / "I don't love you."

3-24

rdt20 FSM specification

sender FSM:
  state Wait for call from above:
    rdt_send(data): sndpkt = make_pkt(data, checksum); udt_send(sndpkt)  -> Wait for ACK or NAK
  state Wait for ACK or NAK:
    rdt_rcv(rcvpkt) && isNAK(rcvpkt): udt_send(sndpkt)
    rdt_rcv(rcvpkt) && isACK(rcvpkt): (done)  -> Wait for call from above

receiver FSM:
  state Wait for call from below:
    rdt_rcv(rcvpkt) && corrupt(rcvpkt): udt_send(NAK)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt): extract(rcvpkt, data); deliver_data(data); udt_send(ACK)

Buffer is needed to store data from application layer or to block call

3-25

rdt20 operation with no errors

(Same FSM as on the previous slide; the highlighted path is the no-error case: the sender sends the packet, the receiver finds it not corrupt, delivers the data and sends an ACK, and the sender returns to waiting for data from above.)

3-26

rdt20 error scenario

(Same FSM; the highlighted path is the error case: the packet arrives corrupted, the receiver sends a NAK, and the sender retransmits until the receiver GOT IT.)

3-27

rdt20 has a fatal flaw

What happens if ACKNAK corrupted

sender doesnrsquot know what happened at receiver

canrsquot just retransmit possible duplicate

What to do? Could the sender ACK/NAK the receiver's ACK/NAK? What if that ACK/NAK is lost? Retransmit — but this might cause retransmission of a correctly received pkt.

Handling duplicates sender adds sequence

number to each pkt sender retransmits

current pkt if ACKNAK garbled

receiver discards (doesnrsquot deliver up) duplicate pkt

Sender sends one packet then waits for receiver response

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender: a seq # is added to each pkt; two seq #'s (0, 1) will suffice — why? The sender must check whether the received ACK/NAK is corrupted. Twice as many states: the state must "remember" whether the "current" pkt has seq # 0 or 1.

Receiver: must check whether the received packet is a duplicate; the state indicates whether 0 or 1 is the expected pkt seq #.

Note: the receiver cannot know whether its last ACK/NAK was received OK at the sender.

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (no NAKs).

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption: the underlying channel can also lose packets (data or ACKs). Checksums, seq #'s, ACKs, and retransmissions will help, but are not enough.

Q: how to deal with loss? The sender could wait until it is certain that data or an ACK was lost, then retransmit — yuck, drawbacks.

Approach: the sender waits a "reasonable" amount of time for an ACK, and retransmits if no ACK is received in this time. If the pkt (or ACK) was just delayed (not lost), the retransmission will be a duplicate, but the use of seq #'s already handles this; the receiver must specify the seq # of the pkt being ACKed. Requires a countdown timer.

What is the "right value" for the timer? It depends on the flow and network conditions.

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)
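A compact Python sketch (not the slides' exact FSM) of the rdt3.0 alternating-bit sender above. udt_send and the timer routines are injected (assumed primitives); make_pkt/corrupt/is_ack are small helpers defined here so the sketch stands alone.

import zlib

def checksum(fields):
    return zlib.crc32(repr(sorted(fields.items())).encode())

def make_pkt(**fields):
    pkt = dict(fields)
    pkt["chk"] = checksum(fields)
    return pkt

def corrupt(pkt):
    fields = {k: v for k, v in pkt.items() if k != "chk"}
    return pkt.get("chk") != checksum(fields)

def is_ack(pkt, seq):
    return pkt.get("ack") == seq

class Rdt30Sender:
    def __init__(self, udt_send, start_timer, stop_timer):
        self.udt_send = udt_send          # unreliable-channel send (assumed primitive)
        self.start_timer = start_timer    # start_timer(callback) arms or re-arms the countdown
        self.stop_timer = stop_timer
        self.seq = 0                      # alternating sequence number: 0 or 1
        self.sndpkt = None
        self.waiting = False              # True while an unACKed pkt is in flight

    def rdt_send(self, data):             # call from above (stop-and-wait: one pkt at a time)
        assert not self.waiting
        self.sndpkt = make_pkt(seq=self.seq, data=data)
        self.udt_send(self.sndpkt)
        self.start_timer(self.timeout)
        self.waiting = True

    def timeout(self):                    # no ACK in time: retransmit and restart the timer
        self.udt_send(self.sndpkt)
        self.start_timer(self.timeout)

    def rdt_rcv(self, rcvpkt):            # an ACK (possibly corrupted or duplicate) arrives
        if corrupt(rcvpkt) or not is_ack(rcvpkt, self.seq):
            return                        # ignore; the running timer will trigger a resend
        self.stop_timer()
        self.seq = 1 - self.seq
        self.waiting = False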

3-35

rdt30 in action

Timer: tick, tick, ...

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt3.0 works, but its performance stinks. Example: 1 Gbps link, 15 ms end-to-end propagation delay, 1 KB packet:

T_transmit = L / R = (8000 bits/pkt) / (10^9 bits/sec) = 8 microsec

U_sender — utilization: the fraction of time the sender is busy sending:

U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

where L = packet length in bits and R = transmission rate (bps). One 1 KB pkt every 30 msec -> 33 kB/sec throughput over a 1 Gbps link: the network protocol limits the use of physical resources!
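A quick numeric check (Python sketch) of the stop-and-wait utilization numbers above.

R = 1e9                                  # link rate, bits/sec
L = 8000                                 # packet size, bits (1 KB)
RTT = 0.030                              # 30 ms round-trip time
t_transmit = L / R                       # 8e-06 sec = 8 microsec
U_sender = (L / R) / (RTT + L / R)       # ~0.00027
throughput = U_sender * R / 8            # ~33,000 bytes/sec over a 1 Gbps link
print(t_transmit, U_sender, throughput)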

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U_sender = 3 (L/R) / (RTT + L/R) = 0.024 / 30.008 = 0.0008

Pipelining three packets increases utilization by a factor of 3!

3-41

Go-Back-N (sliding window protocol)

Sender: k-bit seq # in the pkt header; a "window" of up to N consecutive unACKed pkts is allowed.

ACK(n): ACKs all pkts up to and including seq # n — a "cumulative ACK". The sender may receive duplicate ACKs (see receiver).

A timer for each in-flight pkt; timeout(n): retransmit pkt n and all higher-seq-# pkts in the window.

(For now, treat the seq # as unlimited.)

Q: what happens when a receiver is totally disconnected? The sender gives up after some maximum number of retries (MAX RETRY).

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Single Wait state; initially base = 1, nextseqnum = 1.

rdt_send(data):
  if (nextseqnum < base + N) {
    sndpkt[nextseqnum] = make_pkt(nextseqnum, data, chksum)
    udt_send(sndpkt[nextseqnum])
    if (base == nextseqnum) start_timer       /* first pkt in the pipe */
    nextseqnum++
  } else refuse_data(data)                     /* buffer data or block the higher app */

timeout:
  start_timer                                  /* reset timer */
  udt_send(sndpkt[base]); udt_send(sndpkt[base+1]); ...; udt_send(sndpkt[nextseqnum-1])

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
  base = getacknum(rcvpkt) + 1
  if (base == nextseqnum) stop_timer           /* no pkt in the pipe */
  else start_timer

rdt_rcv(rcvpkt) && corrupt(rcvpkt):
  (do nothing)
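A Python sketch (not from the slides) of the same GBN sender; udt_send and a single restartable timer are injected so the windowing and cumulative-ACK logic stand alone.

class GbnSender:
    def __init__(self, N, udt_send, start_timer, stop_timer):
        self.N = N                      # window size
        self.base = 1                   # oldest unACKed seq #
        self.nextseqnum = 1
        self.sndpkt = {}                # seq # -> packet, kept for retransmission
        self.udt_send = udt_send
        self.start_timer = start_timer  # start_timer(callback): (re)arm the single timer
        self.stop_timer = stop_timer

    def rdt_send(self, data):
        if self.nextseqnum >= self.base + self.N:
            return False                # window full: refuse data (caller buffers or blocks)
        pkt = {"seq": self.nextseqnum, "data": data}
        self.sndpkt[self.nextseqnum] = pkt
        self.udt_send(pkt)
        if self.base == self.nextseqnum:
            self.start_timer(self.timeout)   # timer runs for the oldest unACKed pkt
        self.nextseqnum += 1
        return True

    def timeout(self):                  # go back N: resend everything still unACKed
        self.start_timer(self.timeout)
        for seq in range(self.base, self.nextseqnum):
            self.udt_send(self.sndpkt[seq])

    def ack_received(self, acknum):     # cumulative ACK: everything up to acknum is done
        self.base = acknum + 1
        if self.base == self.nextseqnum:
            self.stop_timer()           # nothing left in the pipe
        else:
            self.start_timer(self.timeout)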

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Single Wait state; initially: expectedseqnum = 1; sndpkt = make_pkt(expectedseqnum, ACK, chksum).

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum):
  extract(rcvpkt, data); deliver_data(data)
  sndpkt = make_pkt(expectedseqnum, ACK, chksum); udt_send(sndpkt)
  expectedseqnum++

default (anything else):
  udt_send(sndpkt)        /* re-ACK the pkt with the highest in-order seq # */

If an in-order pkt is received, deliver it to the app and ACK it; else just drop it. We don't even need a timer at the receiver — simply rely on retransmission by the sender.

3-44

GBN in action

What determines the size of the window? 1. RTT; 2. buffering at the receiver (flow control); 3. network congestion.

Q: GBN can have poor performance — how? The sender sends pkts 1,2,3,4,5,6,7,8,9; pkt 1 gets lost; the receiver gets pkts 2,3,4,5,... but will discard them all.

Window size = N = 4

3-45

Selective Repeat (improvement of the GBN protocol): the receiver individually acknowledges all correctly received pkts and buffers pkts as needed for eventual in-order delivery to the upper layer. E.g., the sender sends pkts 1, 2, 3, 4, ..., 10; the receiver got 2, 4, 6, 8, 10; the sender resends only 1, 3, 5, 7, 9.

The sender only resends pkts for which an ACK was not received, keeping a timer for EACH unACKed pkt. Sender window: N consecutive seq #'s, which again limits the seq #'s of sent, unACKed pkts.

3-46

Selective repeat sender receiver windows

Q: why do we have this? Because an ACK may have been lost, or an ACK may still be on its way.

3-47

Selective repeat

sender:
  data from above: if the next available seq # is in the window, send the pkt
  timeout(n): resend pkt n, restart its timer
  ACK(n) in [sendbase, sendbase+N]: mark pkt n as received; if n is the smallest unACKed pkt, advance the window base to the next unACKed seq # (slide the window)

receiver:
  pkt n in [rcvbase, rcvbase+N-1]: send ACK(n); if out-of-order, buffer it; if in-order, deliver it (also deliver any buffered in-order pkts) and advance the window to the next not-yet-received pkt
  pkt n in [rcvbase-N, rcvbase-1]: ACK(n)   (Q: why do we need this? The ACK got lost; the sender may time out and resend the pkt, and we need to ACK it.)
  otherwise: ignore

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example: seq #'s 0, 1, 2, 3; window size N = 3. The receiver sees no difference in the two scenarios, and incorrectly passes duplicate data up as new in (a).

Q: what is the relationship between seq # space size and window size? N <= 2^k / 2, where in real life we use k bits to implement the seq # — a practical issue.

3-50

Why bother studying reliable data transfer? We know it is provided by TCP, so why bother to study it?

Sometimes we may need to implement "some form" of reliable transfer without heavy-duty TCP. A good example is multimedia streaming: even though the application is loss-tolerant, if too many packets get lost the visual quality suffers, so we may want to implement some form of reliable transfer.

At the very least, appreciate the "good services" provided by some Internet gurus.

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point: one sender, one receiver (not multicast); reliable, in-order byte stream: no "message boundaries" (in the app layer we need delimiters); pipelined: TCP congestion and flow control set the window size

[Diagram] send & receive buffers: application writes data -> socket door -> TCP send buffer -> segment -> TCP receive buffer -> socket door -> application reads data.

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

32 bits wide; fields: source port #, dest port #; sequence number; acknowledgement number; head len; not used; flags U A P R S F; receive window; checksum; urgent data pointer; options (variable length); application data (variable length).

URG: urgent data (generally not used). ACK: ACK # valid. PSH: push data now (generally not used). RST, SYN, FIN: connection establishment (setup, teardown commands). Receive window: # bytes the rcvr is willing to accept. Counting is by "bytes" of data (not segments). Internet checksum (as in UDP). Due to the options field we have a variable-length header.

3-54

TCP seq #'s and ACKs. Seq #: the byte-stream "number" of the first byte in the segment's data. ACK: the seq # of the next byte expected from the other side — a cumulative ACK. Q: how does the receiver handle out-of-order segments? A: the TCP spec doesn't say — it's up to the implementor.

Simple telnet scenario: the user types 'C' at Host A. A -> B: Seq=42, ACK=79, data = 'C'. Host B ACKs receipt of 'C' and echoes back 'C': B -> A: Seq=79, ACK=43, data = 'C'. Host A ACKs receipt of the echoed 'C': A -> B: Seq=43, ACK=80. (Initial seq #'s are negotiated during the 3-way handshake.)

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and Timeout:

EstimatedRTT = (1 - α) * EstimatedRTT + α * SampleRTT

Exponential weighted moving average: the influence of a past sample decreases exponentially fast. Typical value: α = 0.125.

ERTT(0) = 0
ERTT(1) = (1 - α) * ERTT(0) + α * SRTT(1) = α * SRTT(1)
ERTT(2) = (1 - α) * α * SRTT(1) + α * SRTT(2)
ERTT(3) = (1 - α)(1 - α) * α * SRTT(1) + (1 - α) * α * SRTT(2) + α * SRTT(3)

3-57

Example RTT estimation: [Chart] RTT from gaia.cs.umass.edu to fantasia.eurecom.fr — RTT (milliseconds, 100-350) vs. time (seconds), showing SampleRTT and EstimatedRTT.

3-58

TCP Round Trip Time and Timeout — setting the timeout (Jacobson/Karels): EstimatedRTT plus a "safety margin"; a large variation in EstimatedRTT means a larger safety margin. First estimate how much SampleRTT deviates from EstimatedRTT:

DevRTT = (1 - β) * DevRTT + β * |SampleRTT - EstimatedRTT|     (typically β = 0.25)

Then set the timeout interval:

TimeoutInterval = EstimatedRTT + 4 * DevRTT
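A small Python sketch of the estimator above (α and β as on the slides). One assumption differs from the slides: the first sample initializes the estimate rather than starting from ERTT(0) = 0.

class RttEstimator:
    def __init__(self, alpha=0.125, beta=0.25):
        self.alpha = alpha
        self.beta = beta
        self.estimated_rtt = None
        self.dev_rtt = 0.0

    def update(self, sample_rtt):
        # sample_rtt should ignore retransmitted segments
        if self.estimated_rtt is None:
            self.estimated_rtt = sample_rtt          # first sample initializes the estimate
        else:
            self.estimated_rtt = (1 - self.alpha) * self.estimated_rtt + self.alpha * sample_rtt
        self.dev_rtt = (1 - self.beta) * self.dev_rtt + self.beta * abs(sample_rtt - self.estimated_rtt)
        return self.estimated_rtt + 4 * self.dev_rtt  # TimeoutInterval

est = RttEstimator()
for s in (0.105, 0.120, 0.110, 0.250, 0.115):        # made-up SampleRTT values in seconds
    print(round(est.update(s), 3))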

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch (event) {

    event: data received from application above
      create TCP segment with sequence number NextSeqNum
      if (timer currently not running) start timer
      pass segment to IP
      NextSeqNum = NextSeqNum + length(data)

    event: timer timeout
      retransmit not-yet-acknowledged segment with smallest sequence number
      start timer

    event: ACK received, with ACK field value of y
      if (y > SendBase) {
        SendBase = y
        if (there are currently not-yet-acknowledged segments) start timer
      }
  }
} /* end of loop forever */

Comment: SendBase - 1 = last cumulatively ACKed byte. Example: SendBase - 1 = 71 and y = 73, so the rcvr wants byte 73 and beyond; y > SendBase, so new data is ACKed.

3-63

TCP retransmission scenarios

Lost ACK scenario: Host A sends Seq=92, 8 bytes of data; Host B's ACK=100 is lost (X); A times out and resends Seq=92, 8 bytes of data; B ACKs 100 again; SendBase moves from 92 to 100.

Premature timeout: Host A sends Seq=92 (8 bytes) and then Seq=100 (20 bytes); B replies ACK=100 and ACK=120, but A's timer for Seq=92 expires before the ACK arrives, so A resends Seq=92, 8 bytes of data; B replies with the cumulative ACK=120; SendBase moves to 100 and then to 120.

3-64

TCP retransmission scenarios (more)

Cumulative ACK scenario: Host A sends Seq=92, 8 bytes of data and Seq=100, 20 bytes of data; B's ACK=100 is lost (X), but ACK=120 arrives before A's timeout, so nothing is retransmitted; SendBase = 120. Room for improvement?

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at receiver -> TCP receiver action:

Arrival of an in-order segment with the expected seq #; all data up to the expected seq # already ACKed -> delayed ACK: wait up to 500 ms for the next segment; if no next segment, send an ACK.

Arrival of an in-order segment with the expected seq #; one other segment has an ACK pending -> immediately send a single cumulative ACK, ACKing both in-order segments.

Arrival of an out-of-order segment with a higher-than-expected seq # (gap detected) -> immediately send a duplicate ACK, indicating the seq # of the next expected byte.

Arrival of a segment that partially or completely fills a gap -> immediately send an ACK, provided the segment starts at the lower end of the gap.

ACK the "largest in-order byte" seq #. Delayed ACKs REDUCE FEEDBACK TRAFFIC BY HALF.

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments) start timer
  }
  else {                                       /* a duplicate ACK for an already ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y    /* fast retransmit */
  }

Q: why resend the pkt with seq # y? A: that is what the receiver expects.

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

The receive side of a TCP connection has a receive buffer. Flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast. It is a speed-matching service: matching the send rate to the receiving app's drain rate — the app process may be slow at reading from the buffer.

3-70

TCP Flow control how it works

(Suppose the TCP receiver discards out-of-order segments.) Spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]. The rcvr advertises the spare room by including the value of RcvWindow in segments; the sender limits unACKed data to RcvWindow, which guarantees the receive buffer doesn't overflow. This goes to show that the design of the header fields matters.
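A tiny Python sketch of the bookkeeping above; the byte counters are made-up example values.

RcvBuffer = 64 * 1024          # receiver buffer size in bytes (example value)
LastByteRcvd = 150_000         # hypothetical byte counters at the receiver
LastByteRead = 120_000
RcvWindow = RcvBuffer - (LastByteRcvd - LastByteRead)   # spare room advertised to the sender

LastByteSent = 180_000         # hypothetical byte counters at the sender
LastByteAcked = 150_000
in_flight = LastByteSent - LastByteAcked
assert in_flight <= RcvWindow  # sender keeps unACKed data within the advertised window
print(RcvWindow, in_flight)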

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

1. Connection request (client -> server): SYN=1, seq=client_isn
2. Connection granted (server -> client): SYN=1, seq=server_isn, ack=client_isn+1
3. ACK (client -> server): SYN=0, seq=client_isn+1, ack=server_isn+1

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q: why don't we combine the ACK and FIN? The sender may still have data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

Unlimited shared output link buffers; Host A sends λ_in = original data (from the application); Host B receives λ_out (to the application).

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

Finite shared output link buffers; Host A: λ_in = original data, λ'_in = original data plus retransmitted data (retransmission by the transport layer because a pkt got dropped); Host B: λ_out.

3-81

Causes/costs of congestion, scenario 2: always λ_in = λ_out (goodput); with "perfect" retransmission only when there is loss, λ'_in > λ_out; retransmission of a delayed (not lost) packet makes λ'_in larger (than the perfect case) for the same λ_out.

"Costs" of congestion: more work (retransmissions) for a given "goodput"; unneeded retransmissions: the link carries multiple copies of a pkt.

3-82

Causes/costs of congestion, scenario 3: four senders, multihop paths, timeout/retransmit. Q: what happens as λ_in and λ'_in increase? Finite shared output link buffers; Host A sends λ_in original data (λ'_in = original data plus retransmitted data); Host B receives λ_out.

3-83

Causescosts of congestion scenario 3

Another "cost" of congestion: when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted. As the offered load grows, λ_out collapses — the system collapses (e.g., students).

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

A two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell, so the sender's send rate becomes the minimum supportable rate on the path.

An EFCI (explicit forward congestion indication) bit in data cells is set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell.

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin / RTT bytes/sec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

[Chart] congestion window (8, 16, 24 Kbytes) vs. time for a long-lived TCP connection: a sawtooth pattern.

multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events ("probing")

Role of ACKs: 1. an indication of loss; 2. ACKs are self-clocking — if the delay is large, the rate of congestion-window growth is reduced. Human analogy: HK government or CUHK.
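A minimal Python sketch of the AIMD rule described above (window in MSS units; a loss halves the window, otherwise +1 MSS per RTT).

def aimd_update(cong_win_mss, loss_event):
    if loss_event:
        return max(1.0, cong_win_mss / 2.0)   # multiplicative decrease
    return cong_win_mss + 1.0                  # additive increase: +1 MSS per RTT

w = 10.0
for rtt, loss in enumerate([False, False, False, True, False, False]):
    w = aimd_update(w, loss)
    print(rtt, w)        # 11, 12, 13, 6.5, 7.5, 8.5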

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement: after 3 dup ACKs, CongWin is cut in half and the window then grows linearly. This is known as the "fast recovery" phase, implemented in newer TCP (Reno). But after a timeout event, CongWin is instead set to 1 MSS; the window then grows exponentially up to a threshold, then grows linearly.

Philosophy: 3 dup ACKs indicate that the network is still capable of delivering some segments; a timeout before 3 dup ACKs is "more alarming".

3-93

Refinement (more). Q: when should the exponential increase switch to linear? A: when CongWin gets to 1/2 of its value before the timeout.

Implementation: a variable Threshold; at a loss event, Threshold is set to 1/2 of CongWin just before the loss event.

[Chart] congestion window size (segments, 0-14) vs. transmission round (1-15), showing the threshold and the behavior of TCP Tahoe vs. TCP Reno.

3-94

Summary: TCP Congestion Control. When CongWin is below Threshold, the sender is in the slow-start phase: the window grows exponentially. When CongWin is above Threshold, the sender is in the congestion-avoidance phase: the window grows linearly. When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold. When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate and prepare for a loss.
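A Python sketch (window in MSS units) of the slow-start / congestion-avoidance rules just summarized; the initial Threshold value is an assumption.

class TcpCongestionControl:
    def __init__(self):
        self.cong_win = 1.0        # congestion window, in MSS
        self.threshold = 64.0      # initial threshold (example value)

    def on_new_ack(self):
        if self.cong_win < self.threshold:
            self.cong_win += 1.0                      # slow start: +1 MSS per ACK (doubles per RTT)
        else:
            self.cong_win += 1.0 / self.cong_win      # congestion avoidance: +1 MSS per RTT

    def on_triple_dup_ack(self):
        self.threshold = self.cong_win / 2.0
        self.cong_win = self.threshold                # fast recovery (Reno)

    def on_timeout(self):
        self.threshold = self.cong_win / 2.0
        self.cong_win = 1.0                           # back to slow start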

3-95

TCP sender congestion control

Event: ACK receipt for previously unACKed data; state: Slow Start (SS); action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to Congestion Avoidance. Commentary: results in a doubling of CongWin every RTT.

Event: ACK receipt for previously unACKed data; state: Congestion Avoidance (CA); action: CongWin = CongWin + MSS * (MSS / CongWin). Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT.

Event: loss event detected by triple duplicate ACK; state: SS or CA; action: Threshold = CongWin/2, CongWin = Threshold, set state to Congestion Avoidance. Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS.

Event: timeout; state: SS or CA; action: Threshold = CongWin/2, CongWin = 1 MSS, set state to Slow Start. Commentary: enter slow start.

Event: duplicate ACK; state: SS or CA; action: increment the duplicate ACK count for the segment being ACKed. Commentary: CongWin and Threshold not changed.

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.) Let W be the window size when a loss occurs. When the window is W, throughput is W/RTT. Just after a loss, the window drops to W/2 and throughput to W/(2 RTT). So the average throughput is 0.75 W/RTT.

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT; we want 10 Gbps throughput. This requires a window size of W = 83,333 in-flight segments.

Throughput in terms of loss rate: throughput = (1.22 * MSS) / (RTT * sqrt(L)). To reach 10 Gbps we need a loss rate of L = 2 * 10^-10 — wow! New versions of TCP for high-speed networks are needed.
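A quick Python check of the numbers above (and of the 0.75 W/RTT average from the previous slide); the W_seg = 20 example at the end is an assumed value for illustration.

from math import sqrt

MSS = 1500 * 8          # segment size in bits
RTT = 0.100             # seconds
target = 10e9           # 10 Gbps

W = target * RTT / MSS                  # ~83,333 in-flight segments
L = (1.22 * MSS / (RTT * target))**2    # required loss rate, ~2e-10
print(round(W), L)

W_seg = 20                              # example loss-time window, in segments
print(0.75 * W_seg * MSS / RTT / 1e6, "Mbps average throughput")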

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g., similar propagation delay, etc.): additive increase gives a slope of 1 as throughput increases; multiplicative decrease cuts throughput proportionally.

[Chart] Connection 2 throughput vs. Connection 1 throughput (both axes 0 to R), with the "equal bandwidth share" line: congestion avoidance (additive increase), then loss (decrease window by a factor of 2), repeated — the trajectory moves toward the equal-share line.

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts, and web browsers do this. Example: a link of rate R supporting 9 connections; a new app asking for 1 TCP connection gets rate R/10, but a new app asking for 11 TCP connections gets around R/2.

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1): assume no congestion-window constraint. Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R — the ACK for the first segment in the window returns before a window's worth of data has been sent.

delay = 2RTT + O/R

(The object request is piggybacked.) This is the "lower bound" of latency.

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R — the sender must wait for an ACK after sending each window's worth of data.

delay = 2RTT + O/R + (K-1)[S/R + RTT - WS/R]

where K is the number of windows of data that cover the object: if O is the "size" of the object, K = O/(WS), rounded up to an integer.
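A small Python sketch of the two static-window cases above; the example parameter values at the end are assumptions, not from the slides.

from math import ceil

def fixed_window_delay(O, S, R, RTT, W):
    K = ceil(O / (W * S))                       # number of windows covering the object
    if W * S / R > RTT + S / R:                 # case 1: the pipe never drains
        return 2 * RTT + O / R
    stall = S / R + RTT - W * S / R             # idle time per window after the first
    return 2 * RTT + O / R + (K - 1) * stall    # case 2

# example (assumed numbers): O = 100 kbit object, S = 4 kbit segments, R = 1 Mbps, RTT = 100 ms
print(fixed_window_delay(O=100e3, S=4e3, R=1e6, RTT=0.1, W=4))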

3-104

TCP Delay Modeling Slow Start (1)

Now suppose the window grows according to slow start. We will show that the delay for one object is

Latency = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where P is the number of times TCP idles at the server: P = min{Q, K-1}, where Q is the number of times the server would idle if the object were of infinite size, and K is the number of windows that cover the object.

3-105

TCP Delay Modeling Slow Start (2)

[Diagram] time at client / time at server: one RTT to initiate the TCP connection, one RTT to request the object; then first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R; complete transmission; object delivered.

Example: O/S = 15 segments, K = 4 windows, Q = 2, P = min{K-1, Q} = 2; the server idles P = 2 times.

Delay components: 2 RTT for connection establishment and request; O/R to transmit the object; the time the server idles due to slow start. The server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

delay = O/R + 2 RTT + sum_{k=1..P} idleTime_k
      = O/R + 2 RTT + sum_{k=1..P} [ S/R + RTT - 2^(k-1) S/R ]
      = O/R + 2 RTT + P [RTT + S/R] - (2^P - 1) S/R

where, for k = 1, 2, ..., K:
  2^(k-1) S/R = time to transmit the kth window
  S/R + RTT = time from when the server starts to send a segment until the server receives an acknowledgement
  [S/R + RTT - 2^(k-1) S/R]^+ = idle time after the kth window

[Diagram] as before: initiate TCP connection, request object, first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R, complete transmission, object delivered.

3-107

TCP Delay Modeling (4)

Recall that K is the number of windows that cover the object. How do we calculate K?

K = min { k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O }
  = min { k : (2^k - 1) S >= O }
  = min { k : 2^k >= O/S + 1 }
  = min { k : k >= log2(O/S + 1) }
  = ceil( log2(O/S + 1) )

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
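A Python sketch that computes K, Q, P and the slow-start latency from the formulas above, obtaining Q directly from the idle-time definition rather than a closed form. With S = 536 bytes, RTT = 1 s, O = 5 Kbytes, R = 1 Mbps it reproduces the 1 Mbps row of Experiment C below (P = 3, latency about 5.0 sec).

from math import ceil, log2

def slow_start_latency(O, S, R, RTT):
    K = ceil(log2(O / S + 1))                        # windows that cover the object
    idle = lambda k: S / R + RTT - (2 ** (k - 1)) * S / R
    Q = sum(1 for k in range(1, 64) if idle(k) > 0)  # idles for an infinite-size object
    P = min(K - 1, Q)                                # times the server actually idles
    latency = 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R
    return K, Q, P, latency

S = 536 * 8                                          # segment size in bits
print(slow_start_latency(O=5_000 * 8, S=S, R=1e6, RTT=1.0))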

3-108

Experiment A

R         O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps   28.6 sec   1   28.8 sec                    28.9 sec
100 kbps  8 sec      2   8.2 sec                     8.4 sec
1 Mbps    800 msec   5   1 sec                       1.5 sec
10 Mbps   80 msec    7   0.28 sec                    0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 Kbytes (relatively large).

1. Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R         O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps   1.43 sec   1   1.63 sec                    1.73 sec
100 kbps  0.4 sec    2   0.6 sec                     0.76 sec
1 Mbps    40 msec    3   0.24 sec                    0.52 sec
10 Mbps   4 msec     3   0.20 sec                    0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 Kbytes (relatively small).

1. Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

R         O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps   1.43 sec   3   3.4 sec                     5.8 sec
100 kbps  0.4 sec    3   2.4 sec                     5.2 sec
1 Mbps    40 msec    3   2.0 sec                     5.0 sec
10 Mbps   4 msec     3   2.0 sec                     5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 Kbytes (relatively small).

1. Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling. Assume a Web page consists of 1 base HTML page (of size O bits) and M images (each also of size O bits).

Non-persistent HTTP: M+1 TCP connections in series. Response time = (M+1)O/R + (M+1)2RTT + sum of idle times.

Persistent HTTP: 2 RTT to request and receive the base HTML file, 1 RTT to request and receive the M images. Response time = (M+1)O/R + 3RTT + sum of idle times.

Non-persistent HTTP with X parallel connections (suppose M/X is an integer): 1 TCP connection for the base file, then M/X sets of X parallel connections for the images. Response time = (M+1)O/R + (M/X + 1)2RTT + sum of idle times.
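A Python sketch comparing the three response-time formulas above; the slow-start idle times are ignored here, so the values are lower than those plotted in the charts below.

def non_persistent(O, R, RTT, M):
    return (M + 1) * O / R + (M + 1) * 2 * RTT

def persistent(O, R, RTT, M):
    return (M + 1) * O / R + 3 * RTT

def parallel_non_persistent(O, R, RTT, M, X):
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT

O, RTT, M, X = 5 * 8000, 1.0, 10, 5          # O = 5 Kbytes, RTT = 1 s, M = 10, X = 5
for R in (28e3, 100e3, 1e6, 10e6):
    print(R, non_persistent(O, R, RTT, M), persistent(O, R, RTT, M),
          parallel_non_persistent(O, R, RTT, M, X))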

3-112

[Chart] HTTP response time (in seconds, 0-20) vs. access-link bandwidth (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.

For a low-bandwidth connection, connection and response time are dominated by transmission time.

Persistent connections only give minor improvement over parallel connections

3-113

[Chart] HTTP response time (in seconds, 0-70) vs. access-link bandwidth (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.

For larger RTT, response time is dominated by TCP connection establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary: principles behind transport layer services: multiplexing, demultiplexing, reliable data transfer, flow control, congestion control; instantiation and implementation in the Internet: UDP, TCP.

Next: leaving the network "edge" (application, transport layers) and heading into the network "core".

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 19: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-19

Principles of Reliable data transfer important in app transport link layers top-10 list of important networking topics

characteristics of unreliable channel will determine complexity of reliable data transfer protocol (rdt)

abstractionThis picture sets the scenario

3-20

Reliable data transfer getting started

sendside

receiveside

rdt_send() called from above (eg by app)

Passed data to deliver to receiver upper

layer

udt_send() called by rdtto transfer packet over

unreliable channel to receiver

rdt_rcv() called when packet arrives on rcv-side of

channel

deliver_data() called by rdt to deliver data

to upper

Let us now look at the gut of these modules Any question

3-21

Reliable data transfer getting startedWersquoll incrementally develop sender receiver

sides of reliable data transfer protocol (rdt)

consider only unidirectional data transfer but control info will flow on both directions

use finite state machines (FSM) to specify sender receiver

state1

state2

event causing state transitionactions taken on state transition

state when in this ldquostaterdquo next

state uniquely determined by

next eventeventactions

(DONrsquoT FALL ASLEEP)

Event timer receives message hellipetcAction executes a program send message hellipetc

3-22

Rdt10 reliable transfer over a reliable channel

underlying channel perfectly reliable no bit errors no loss of packets

separate FSMs for sender receiver sender sends data into underlying channel receiver reads data from underlying channel

Wait for call from

above packet = make_pkt(data)udt_send(packet)

rdt_send(data)

extract (packetdata)deliver_data(data)

Wait for call from

below

rdt_rcv(packet)

sender receiver

In reality this is an unrealistic assumption but

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in packet recall UDP checksum to detect bit errors

the question how to recover from errors acknowledgements (ACKs) receiver explicitly tells

sender that pkt received OK negative acknowledgements (NAKs) receiver

explicitly tells sender that pkt had errors sender retransmits pkt on receipt of NAK human scenarios using ACKs NAKs

new mechanisms in rdt20 (beyond rdt10) error detection receiver feedback control msgs (ACKNAK) rcvr-

gtsender

Ack I love u I love u 2Nak I love u I donrsquot love u

3-24

rdt20 FSM specification

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

belowsender

receiverrdt_send(data)

Buffer is needed to store data from application layer or to block call

3-25

rdt20 operation with no errors

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

3-26

rdt20 error scenario

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

GOT IT

3-27

rdt20 has a fatal flaw

What happens if ACKNAK corrupted

sender doesnrsquot know what happened at receiver

canrsquot just retransmit possible duplicate

What to do sender ACKsNAKs

receiverrsquos ACKNAK What if sender ACKNAK lost

retransmit but this might cause retransmission of correctly received pkt

Handling duplicates sender adds sequence

number to each pkt sender retransmits

current pkt if ACKNAK garbled

receiver discards (doesnrsquot deliver up) duplicate pkt

Sender sends one packet then waits for receiver response

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother studying reliable data transfer?

We know it is provided by TCP, so why bother to study it?

Sometimes we may need to implement "some form" of reliable transfer without the heavy-duty TCP.

A good example is multimedia streaming. Even though the application is loss tolerant, if too many packets get lost it affects the visual quality. So we may want to implement some form of reliable transfer.

At the very least, appreciate the "good services" provided by some Internet gurus.

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP: Overview - RFCs: 793, 1122, 1323, 2018, 2581

full duplex data:
  bi-directional data flow in same connection
  MSS: maximum segment size
connection-oriented:
  handshaking (exchange of control msgs) init's sender, receiver state (e.g. buffer size) before data exchange
flow controlled:
  sender will not overwhelm receiver
point-to-point:
  one sender, one receiver (not multicast)
reliable, in-order byte stream:
  no "message boundaries" (in the app layer we need delimiters)
pipelined:
  TCP congestion and flow control set window size
send & receive buffers
  [Figure: application writes data -> socket door -> TCP send buffer -> segments -> TCP receive buffer -> socket door -> application reads data]

(The 800 lbs gorilla in the transport stack; PAY ATTENTION!)

3-53

TCP segment structure

[Figure: TCP segment format, 32 bits wide]
  source port #, dest port #
  sequence number (counting by "bytes" of data, not segments!)
  acknowledgement number
  head len, not used, flag bits U A P R S F
  Receive window (# bytes rcvr willing to accept)
  checksum (Internet checksum, as in UDP)
  Urg data pointer
  Options (variable length) - due to this field we have a variable-length header
  application data (variable length)

URG: urgent data (generally not used)
ACK: ACK # valid
PSH: push data now (generally not used)
RST, SYN, FIN: connection estab (setup, teardown commands)

3-54

TCP seq. #'s and ACKs

Seq. #'s:
  byte stream "number" of first byte in segment's data
ACKs:
  seq # of next byte expected from other side
  cumulative ACK
Q: how does the receiver handle out-of-order segments?
  A: TCP spec doesn't say - up to the implementor

Simple telnet scenario:
  Host A (user types 'C') --> Host B: Seq=42, ACK=79, data = 'C'
  Host B (ACKs receipt of 'C', echoes back 'C') --> Host A: Seq=79, ACK=43, data = 'C'
  Host A (ACKs receipt of echoed 'C') --> Host B: Seq=43, ACK=80

Initial seq #'s are negotiated during the 3-way handshake.

3-55

TCP Round Trip Time and Timeout

Q: how to set TCP timeout value?
  longer than RTT, but RTT varies
  too short: premature timeout, unnecessary retransmissions
  too long: slow reaction to segment loss, poor performance

Q: how to estimate RTT?
  SampleRTT: measured time from segment transmission until ACK receipt
    ignore retransmissions
  SampleRTT will vary; want estimated RTT "smoother"
    average several recent measurements, not just the current SampleRTT

[Figure: two tx/retx/ack timelines, one where the estimated RTT (and hence timeout) is too long, one where it is too short]

3-56

TCP Round Trip Time and Timeout

EstimatedRTT = (1 - α)·EstimatedRTT + α·SampleRTT

Exponential weighted moving average:
  influence of past sample decreases exponentially fast
  typical value: α = 0.125

With ERTT(0) = 0:
  ERTT(1) = (1-α)·ERTT(0) + α·SRTT(1) = α·SRTT(1)
  ERTT(2) = (1-α)·α·SRTT(1) + α·SRTT(2)
  ERTT(3) = (1-α)²·α·SRTT(1) + (1-α)·α·SRTT(2) + α·SRTT(3)

3-57

Example RTT estimation: RTT from gaia.cs.umass.edu to fantasia.eurecom.fr

[Figure: RTT (milliseconds, 100-350) versus time (seconds, 1-106), plotting SampleRTT and EstimatedRTT]

3-58

TCP Round Trip Time and Timeout

Setting the timeout (by Jacobson/Karels): EstimatedRTT plus a "safety margin"
  large variation in EstimatedRTT -> larger safety margin
First estimate how much SampleRTT deviates from EstimatedRTT:

  DevRTT = (1 - β)·DevRTT + β·|SampleRTT - EstimatedRTT|
  (typically, β = 0.25)

Then set the timeout interval:

  TimeoutInterval = EstimatedRTT + 4·DevRTT
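As a quick illustration, the EstimatedRTT / DevRTT / TimeoutInterval updates can be coded directly from the formulas above; this sketch assumes α = 0.125 and β = 0.25 as on the slides, and the SampleRTT values are made up.

```python
# Minimal sketch of the EstimatedRTT / DevRTT / TimeoutInterval update
# (Jacobson/Karels style). All times in seconds.

ALPHA, BETA = 0.125, 0.25

def update(est_rtt, dev_rtt, sample_rtt):
    est_rtt = (1 - ALPHA) * est_rtt + ALPHA * sample_rtt
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - est_rtt)
    timeout = est_rtt + 4 * dev_rtt
    return est_rtt, dev_rtt, timeout

est, dev = 0.100, 0.0        # in practice, seed with the first measurement
for s in (0.106, 0.120, 0.140, 0.090, 0.115):   # hypothetical SampleRTTs
    est, dev, timeout = update(est, dev, s)
    print(f"sample={s:.3f}  EstimatedRTT={est:.3f}  TimeoutInterval={timeout:.3f}")
```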

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IP's unreliable service

Pipelined segments (for performance)
Cumulative ACKs
TCP uses a single retransmission timer

Retransmissions are triggered by:
  timeout events
  duplicate ACKs (for performance reasons)

Initially, consider a simplified TCP sender:
  ignore duplicate ACKs
  ignore flow control, congestion control

3-61

TCP sender events:

data rcvd from app:
  create segment with seq #
  seq # is byte-stream number of first data byte in segment
  start timer if not already running (think of timer as for oldest unACKed segment)
  expiration interval: TimeOutInterval

timeout:
  retransmit segment that caused timeout
  restart timer

ACK rcvd:
  if it acknowledges previously unACKed segments:
    update what is known to be ACKed
    start timer if there are outstanding segments

3-62

TCP sender (simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event)

  event: data received from application above
    create TCP segment with sequence number NextSeqNum
    if (timer currently not running)
      start timer
    pass segment to IP
    NextSeqNum = NextSeqNum + length(data)

  event: timer timeout
    retransmit not-yet-acknowledged segment with smallest sequence number
    start timer

  event: ACK received, with ACK field value of y
    if (y > SendBase) {
      SendBase = y
      if (there are currently not-yet-acknowledged segments)
        start timer
    }
} /* end of loop forever */

Comment:
  SendBase-1: last cumulatively ACKed byte
Example:
  SendBase-1 = 71; y = 73, so the rcvr wants 73+;
  y > SendBase, so new data is ACKed

3-63

TCP retransmission scenarios

[Figure: two Host A / Host B timelines]

Lost ACK scenario:
  A sends Seq=92, 8 bytes data; B's ACK=100 is lost (X);
  A times out and resends Seq=92, 8 bytes data; B replies ACK=100;
  SendBase = 100.

Premature timeout scenario:
  A sends Seq=92, 8 bytes data and Seq=100, 20 bytes data (SendBase = 100, then 120 as ACK=100 and ACK=120 arrive);
  A's timer for Seq=92 expires before its ACK arrives, so A resends Seq=92, 8 bytes data;
  B replies with the cumulative ACK=120; SendBase stays at 120.

3-64

TCP retransmission scenarios (more)

[Figure: Host A / Host B timeline]

Cumulative ACK scenario:
  A sends Seq=92, 8 bytes data and Seq=100, 20 bytes data;
  ACK=100 is lost (X), but ACK=120 arrives before the timeout;
  SendBase = 120, so the Seq=92 segment is not retransmitted.

Room for improvement?

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver -> TCP Receiver action

Arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed
  -> Delayed ACK. Wait up to 500 ms for next segment. If no next segment, send ACK.

Arrival of in-order segment with expected seq #; one other segment has ACK pending
  -> Immediately send a single cumulative ACK, ACKing both in-order segments. (REDUCES FEEDBACK TRAFFIC BY HALF)

Arrival of out-of-order segment with higher-than-expected seq #; gap detected
  -> Immediately send duplicate ACK, indicating seq # of next expected byte (ACK the "largest in-order byte" seq #).

Arrival of segment that partially or completely fills gap
  -> Immediately send ACK, provided that segment starts at the lower end of the gap.

3-66

Fast Retransmit

Time-out period is often relatively long:
  long delay before resending a lost packet

Detect lost segments via duplicate ACKs:
  sender often sends many segments back-to-back
  if a segment is lost, there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data, it supposes that the segment after the ACKed data was lost:
  fast retransmit: resend the segment before the timer expires

3-67

Fast retransmit algorithm:

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments)
      start timer
  }
  else {                       /* a duplicate ACK for an already ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3) {
      resend segment with sequence number y    /* fast retransmit */
    }
  }

Q: why resend the pkt with seq # y?
A: That is what the receiver expects.
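A small sketch of the duplicate-ACK bookkeeping behind fast retransmit (an illustration of the pseudocode above, not real TCP code); the ACK stream and sequence numbers are invented.

```python
# Duplicate-ACK counting: three dup ACKs for the same value y trigger a
# retransmission of the segment starting at y, before the timer expires.

send_base = 92
dup_acks = {}          # ACK value -> count of duplicates seen

def on_ack(y):
    global send_base
    if y > send_base:                  # new data acknowledged
        send_base = y
        dup_acks.clear()
        return f"advance SendBase to {y}"
    dup_acks[y] = dup_acks.get(y, 0) + 1
    if dup_acks[y] == 3:               # 3 dup ACKs: assume segment y was lost
        return f"fast retransmit segment starting at {y}"
    return f"duplicate ACK #{dup_acks[y]} for {y}"

for ack in (100, 100, 100, 100, 120):  # made-up ACK stream
    print(on_ack(ack))
```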

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

flow control: sender won't overflow the receiver's buffer by transmitting too much, too fast

receive side of TCP connection has a receive buffer
  the app process may be slow at reading from the buffer

speed-matching service: matching the send rate to the receiving app's drain rate

3-70

TCP Flow control: how it works

(Suppose the TCP receiver discards out-of-order segments.)

spare room in buffer = RcvWindow
  = RcvBuffer - [LastByteRcvd - LastByteRead]

Rcvr advertises the spare room by including the value of RcvWindow in segments

Sender limits unACKed data to RcvWindow
  guarantees the receive buffer doesn't overflow

This goes to show that the design process of the header is important.
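A tiny worked example of the RcvWindow arithmetic (all numbers are hypothetical):

```python
# Advertised-window arithmetic from this slide.

RcvBuffer    = 64 * 1024     # receiver buffer size (bytes)
LastByteRcvd = 150_000       # highest byte received from the network
LastByteRead = 120_000       # highest byte the application has read

RcvWindow = RcvBuffer - (LastByteRcvd - LastByteRead)
print("advertised RcvWindow =", RcvWindow, "bytes")

# Sender side: unACKed bytes in flight must not exceed RcvWindow.
LastByteSent, LastByteAcked = 165_000, 150_000
in_flight = LastByteSent - LastByteAcked
print("in flight =", in_flight, "ok =", in_flight <= RcvWindow)
```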

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender and receiver establish a "connection" before exchanging data segments

initialize TCP variables:
  seq #s
  buffers, flow control info (e.g. RcvWindow)
client: connection initiator
  Socket clientSocket = new Socket("hostname", "port number");
server: contacted by client
  Socket connectionSocket = welcomeSocket.accept();

Three way handshake:

Step 1: client host sends TCP SYN segment to server
  specifies initial seq #
  no data
Step 2: server host receives SYN, replies with SYN-ACK segment
  server allocates buffers
  specifies server initial seq #
Step 3: client receives SYN-ACK, replies with ACK segment, which may contain data

IMPORTANT: SYN FLOODING

3-73

TCP three-way handshake

[Figure: client/server exchange]
  Connection request:  SYN=1, seq=client_isn
  Connection granted:  SYN=1, seq=server_isn, Ack=client_isn+1
  ACK:                 SYN=0, seq=client_isn+1, Ack=server_isn+1
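Applications never send the SYN / SYN-ACK / ACK themselves; the OS performs the three-way handshake inside connect() and accept(). A minimal sketch using Python sockets (the loopback address and port 5005 are arbitrary choices for this example):

```python
import socket
import threading

# Server side: bind and listen, so the kernel is ready to answer SYNs.
welcome = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
welcome.bind(("127.0.0.1", 5005))
welcome.listen(1)

def accept_one():
    conn, addr = welcome.accept()   # returns once SYN / SYN-ACK / ACK are done
    print("server: connection from", addr)
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

# Client side: connect() makes the kernel send the SYN and complete the handshake.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 5005))
print("client: connected")
client.close()

t.join()
welcome.close()
```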

3-74

TCP Connection Management (cont)

Closing a connection:

client closes socket: clientSocket.close();

Step 1: client end system sends TCP FIN control segment to server
Step 2: server receives FIN, replies with ACK. Closes connection, sends FIN.

[Figure: client sends FIN (close); server ACKs and sends its own FIN (close); client ACKs, enters timed wait, then closed]

Q: why don't we combine the ACK and the FIN?
The sender may still have some data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3: client receives FIN, replies with ACK
  enters "timed wait" - will respond with ACK to received FINs
Step 4: server receives ACK. Connection closed.

Note: with a small modification, this can handle simultaneous FINs.

[Figure: client sends FIN (closing); server ACKs and sends FIN (closing); client ACKs, enters timed wait, then both sides are closed]

3-76

TCP Connection Management (cont)

TCP client lifecycle

TCP server lifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion:
  informally: "too many sources sending too much data too fast for the network to handle"
  different from flow control!
  manifestations:
    lost packets (buffer overflow at routers)
    long delays (queueing in router buffers)
  another top-10 problem!

TCP provides one of the MANY WAYS to perform CC.

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous)
one router, infinite buffers, capacity C
no retransmission

large delays when congested
maximum achievable throughput

[Figure: Host A and Host B each send λin (original data, from the application) into a router with unlimited shared output link buffers; λout is delivered to the application]

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers
sender retransmission of lost packets

[Figure: Host A sends λin (original data) and λ'in (original data plus retransmitted data - due to the transport layer) into a router with finite shared output link buffers; pkts get dropped when the buffer is full; λout is delivered to Host B]

3-81

Causes/costs of congestion: scenario 2

always: λin = λout (goodput)
"perfect" retransmission, only when loss: λ'in > λout
retransmission of delayed (not lost) packets makes λ'in larger (than the perfect case) for the same λout

"costs" of congestion:
  more work (retransmission) for a given "goodput"
  unneeded retransmissions: link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3

four senders
multihop paths
timeout/retransmit

Q: what happens as λin and λ'in increase?

[Figure: Host A sends λin (original data) and λ'in (original data plus retransmitted data) across routers with finite shared output link buffers; λout is delivered to Host B]

3-83

Causes/costs of congestion: scenario 3

Another "cost" of congestion:
  when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted!

[Figure: λout collapses as the offered load increases]

The system collapses (e.g. students!)

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
  no explicit feedback from the network
  congestion inferred from end-system observed loss, delay
  approach taken by TCP

Network-assisted congestion control:
  routers provide feedback to end systems
    single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
    explicit rate sender should send at

3-85

Case study ATM ABR congestion control

ABR: available bit rate:
  "elastic service"
  if sender's path is "underloaded": sender should use available bandwidth
  if sender's path is congested: sender throttled to minimum guaranteed rate

RM (resource management) cells:
  sent by sender, interspersed with data cells
  bits in RM cell set by switches ("network-assisted")
    NI bit: no increase in rate (mild congestion)
    CI bit: congestion indication
  RM cells returned to sender by receiver, with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell
  congested switch may lower the ER value in the cell
  sender's send rate is thus the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch
  if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)
sender limits transmission:
  LastByteSent - LastByteAcked <= CongWin
Roughly,
  rate = CongWin / RTT  bytes/sec
CongWin is a dynamic function of perceived network congestion

How does the sender perceive congestion?
  loss event = timeout or 3 duplicate ACKs
  TCP sender reduces rate (CongWin) after a loss event

three mechanisms:
  AIMD
  slow start
  conservative after timeout events

Note: CC must be efficient to make use of available BW.

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

[Figure: long-lived TCP connection - congestion window sawtooth over time, ranging over 8, 16, 24 Kbytes]

Role of ACK:
  1. an indication of loss
  2. ACKs are self-clocking: if delay is large, the rate of congestion-window growth is reduced

Human analogy: HK government or CUHK?

3-90

TCP Slow Start

When connection begins, CongWin = 1 MSS
  Example: MSS = 500 bytes & RTT = 200 msec
  initial rate = MSS/RTT = 20 kbps

available bandwidth may be >> MSS/RTT
  desirable to quickly ramp up to a respectable rate

When connection begins, increase rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)

When connection begins, increase rate exponentially until the first loss event:
  double CongWin every RTT
  done by incrementing CongWin for every ACK received

Summary: initial rate is slow but ramps up exponentially fast

[Figure: Host A sends one segment, then two segments, then four segments, one RTT apart]

3-92

Refinement

After 3 dup ACKs:
  CongWin is cut in half
  window then grows linearly
  This is known as the "fast-recovery" phase. Implemented in TCP-Reno.

But after a timeout event:
  CongWin is instead set to 1 MSS
  window then grows exponentially
  to a threshold, then grows linearly

Philosophy:
  3 dup ACKs indicate the network is capable of delivering some segments
  a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.

Implementation:
  variable Threshold
  at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

[Figure: congestion window size (segments, 0-14) versus transmission round (1-15) for TCP Tahoe and TCP Reno, with the threshold shown]

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, sender is in slow-start phase; window grows exponentially.

When CongWin is above Threshold, sender is in congestion-avoidance phase; window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP-Vegas: instead of reacting to a loss, anticipate & prepare for a loss.
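A toy simulation of the rules summarized above (slow start, congestion avoidance, triple-dup-ACK, timeout), with the window measured in MSS units; the event sequence is invented purely for illustration.

```python
# Toy congestion-window trace following the summary above and the per-event
# table on the next slide. Not a real TCP implementation.

cwnd, threshold = 1.0, 8.0
events = ["ack"] * 6 + ["3dup"] + ["ack"] * 5 + ["timeout"] + ["ack"] * 4

for i, ev in enumerate(events, 1):
    if ev == "ack":
        if cwnd < threshold:
            cwnd += 1.0                 # slow start: +1 MSS per ACK (doubles per RTT)
        else:
            cwnd += 1.0 / cwnd          # congestion avoidance: +MSS*(MSS/CongWin) per ACK
    elif ev == "3dup":                  # triple duplicate ACK: fast recovery
        threshold = cwnd / 2
        cwnd = threshold
    elif ev == "timeout":               # more alarming: back to slow start
        threshold = cwnd / 2
        cwnd = 1.0
    print(f"{i:2d} {ev:8s} cwnd={cwnd:5.2f} ssthresh={threshold:4.1f}")
```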

3-95

TCP sender congestion control

Event: ACK receipt for previously unACKed data
State: Slow Start (SS)
TCP Sender Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unACKed data
State: Congestion Avoidance (CA)
TCP Sender Action: CongWin = CongWin + MSS * (MSS/CongWin)
Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
State: SS or CA
TCP Sender Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: fast recovery, implementing multiplicative decrease. CongWin will not drop below 1 MSS.

Event: timeout
State: SS or CA
TCP Sender Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: enter slow start

Event: duplicate ACK
State: SS or CA
TCP Sender Action: increment duplicate ACK count for segment being ACKed
Commentary: CongWin and Threshold not changed

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT?
  Ignore slow start.

Let W be the window size when loss occurs.
When the window is W, throughput is W/RTT.
Just after a loss, the window drops to W/2, throughput to W/(2·RTT).
Average throughput: 0.75 W/RTT
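The 0.75·W/RTT figure comes from averaging the sawtooth: assuming one loss per cycle and linear growth from W/2 back to W (as above), the average window is the midpoint of the ramp.

```latex
% Average of a sawtooth that ramps linearly from W/2 to W between losses:
\[
\overline{\text{throughput}}
  = \frac{\tfrac{W}{2} + W}{2}\cdot\frac{1}{RTT}
  = \frac{3}{4}\,\frac{W}{RTT}
  = 0.75\,\frac{W}{RTT}.
\]
```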

3-97

TCP Futures

Example: 1500 byte segments, 100 ms RTT, want 10 Gbps throughput
  requires window size W = 83,333 in-flight segments

Throughput in terms of loss rate:

  Throughput = 1.22 · MSS / (RTT · √L)

For 10 Gbps this requires L = 2·10^-10 -- Wow!
New versions of TCP are needed for high speed.
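A quick sanity check of these numbers (a sketch, assuming the throughput formula above):

```python
# With 1500-byte segments and a 100 ms RTT, how big must W be for 10 Gbps,
# and what loss rate L does  Throughput = 1.22 * MSS / (RTT * sqrt(L))  imply?

MSS = 1500 * 8        # bits
RTT = 0.100           # seconds
target = 10e9         # 10 Gbps

W = target * RTT / MSS                   # segments in flight
L = (1.22 * MSS / (RTT * target)) ** 2   # required loss rate
print(f"W = {W:,.0f} segments, L = {L:.2e}")   # ~83,333 segments and ~2e-10
```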

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]

3-99

Why is TCP fair

Two competing, homogeneous sessions (e.g. similar propagation delay, etc.):
  additive increase gives a slope of 1 as throughput increases
  multiplicative decrease decreases throughput proportionally

[Figure: Connection 1 throughput vs Connection 2 throughput, each bounded by R; the trajectory of congestion avoidance (additive increase) and loss (decrease window by a factor of 2) converges toward the equal-bandwidth-share line]

3-100

Fairness (more)

Fairness and UDP:
  multimedia apps often do not use TCP
    do not want rate throttled by congestion control
  instead use UDP:
    pump audio/video at constant rate, tolerate packet loss
  research area: TCP friendly

Fairness and parallel TCP connections:
  nothing prevents an app from opening parallel connections between 2 hosts
  Web browsers do this
  Example: link of rate R supporting 9 connections;
    new app asks for 1 TCP, gets rate R/10
    new app asks for 11 TCPs, gets around R/2!

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by:
  TCP connection establishment
  data transmission delay
  slow start
  sender's congestion window

Notation, assumptions:
  assume one link between client and server, of rate R
  S: MSS (bits)
  O: object size (bits)
  no retransmissions (no loss, no corruption)
  protocol overhead is negligible

Window size:
  first assume a fixed congestion window, W segments
  then a dynamic window, modeling slow start

3-102

Fixed congestion window (1): assume no congestion-window constraint

Let W denote a 'fixed' congestion window size (a positive integer).

First case: W·S/R > RTT + S/R
  the ACK for the first segment in the window returns before a window's worth of data is sent

  delay = 2·RTT + O/R

(The object request is piggybacked on the connection setup.)
This is the "lower bound" of latency.

3-103

Fixed (or static) congestion window (2)

Second case: W·S/R < RTT + S/R
  wait for the ACK after sending a window's worth of data

  delay = 2·RTT + O/R + (K-1)·[S/R + RTT - W·S/R]

where K is the # of windows of data that cover the object.
If O is the "size" of the object, we have K = O/(W·S) (rounded up to an integer).
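The two cases can be packaged into one small function; a sketch with hypothetical inputs (O and S in bits, R in bps, RTT in seconds):

```python
import math

# Static-window latency model from these two slides.

def fixed_window_latency(O, S, R, RTT, W):
    if W * S / R >= RTT + S / R:             # case 1: ACK returns before window drains
        return 2 * RTT + O / R
    K = math.ceil(O / (W * S))               # windows of data covering the object
    stall = S / R + RTT - W * S / R          # case 2: sender stalls between windows
    return 2 * RTT + O / R + (K - 1) * stall

# Example: O = 100 kbits, S = 4 kbits, R = 1 Mbps, RTT = 100 ms, W = 4
print(fixed_window_latency(O=100e3, S=4e3, R=1e6, RTT=0.1, W=4))
```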

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

  Latency = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where P is the number of times TCP idles at the server:

  P = min{Q, K-1}

- where Q is the number of times the server would idle if the object were of infinite size
- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[Figure: timeline - initiate TCP connection, request object, first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R, complete transmission, object delivered; time at client / time at server]

Example:
  O/S = 15 segments
  K = 4 windows
  Q = 2
  P = min{K-1, Q} = 2

Server idles P = 2 times.

Delay components:
  2 RTT for connection establishment and request
  O/R to transmit the object
  time the server idles due to slow start

Server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

From the timeline:
  S/R + RTT = time from when the server starts to send a segment until the server receives an acknowledgement
  2^(k-1)·S/R = time to transmit the k-th window
  idle time after the k-th window = max{ S/R + RTT - 2^(k-1)·S/R , 0 }

  delay = 2·RTT + O/R + Σ_{k=1..P} idleTime_k
        = 2·RTT + O/R + Σ_{k=1..P} [ S/R + RTT - 2^(k-1)·S/R ]
        = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where k = 1, 2, ..., K.

[Figure: same timeline as before - initiate TCP connection, request object, first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R, complete transmission, object delivered]

3-107

TCP Delay Modeling (4)

Recall: K = the number of windows that cover the object.

How do we calculate K?

  K = min{ k : 2^0·S + 2^1·S + ... + 2^(k-1)·S >= O }
    = min{ k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S }
    = min{ k : 2^k - 1 >= O/S }
    = min{ k : k >= log2(O/S + 1) }
    = ceil( log2(O/S + 1) )

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
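Putting the pieces together (K, Q, P, and the latency formula), here is a sketch whose P values match the Experiment A table below and whose latencies come out in the same ballpark as the tabulated ones; the helper name `slow_start_latency` and the Q expression (derived the same way as K) are this example's own.

```python
import math

# Slow-start latency model built up over the last few slides.
# O, S in bits; R in bps; RTT in seconds.

def slow_start_latency(O, S, R, RTT):
    K = math.ceil(math.log2(O / S + 1))              # windows covering the object
    Q = math.ceil(math.log2(1 + RTT / (S / R)))      # idles for an infinite object
    P = min(K - 1, Q)                                # times the server idles
    latency = 2 * RTT + O / R + P * (RTT + S / R) - (2**P - 1) * S / R
    return P, latency

S, RTT, O = 536 * 8, 0.100, 100e3 * 8                # Experiment A parameters
for R in (28e3, 100e3, 1e6, 10e6):
    P, latency = slow_start_latency(O, S, R, RTT)
    print(f"R = {R/1e3:>6.0f} kbps  P = {P}  latency = {latency:6.2f} s")
```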

3-108

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

R        | O/R      | P | Min latency = O/R + 2·RTT | Latency with slow start
28 kbps  | 28.6 sec | 1 | 28.8 sec                  | 28.9 sec
100 kbps | 8.0 sec  | 2 | 8.2 sec                   | 8.4 sec
1 Mbps   | 800 msec | 5 | 1.0 sec                   | 1.5 sec
10 Mbps  | 80 msec  | 7 | 0.28 sec                  | 0.98 sec

1. Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

R        | O/R      | P | Min latency = O/R + 2·RTT | Latency with slow start
28 kbps  | 1.43 sec | 1 | 1.63 sec                  | 1.73 sec
100 kbps | 0.4 sec  | 2 | 0.6 sec                   | 0.76 sec
1 Mbps   | 40 msec  | 3 | 0.24 sec                  | 0.52 sec
10 Mbps  | 4 msec   | 3 | 0.20 sec                  | 0.50 sec

1. Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

R        | O/R      | P | Min latency = O/R + 2·RTT | Latency with slow start
28 kbps  | 1.43 sec | 3 | 3.4 sec                   | 5.8 sec
100 kbps | 0.4 sec  | 3 | 2.4 sec                   | 5.2 sec
1 Mbps   | 40 msec  | 3 | 2.0 sec                   | 5.0 sec
10 Mbps  | 4 msec   | 3 | 2.0 sec                   | 5.0 sec

1. Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
  1 base HTML page (of size O bits)
  M images (each of size O bits)

Non-persistent HTTP:
  M+1 TCP connections in series
  Response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times

Persistent HTTP:
  2 RTT to request and receive base HTML file
  1 RTT to request and receive M images
  Response time = (M+1)·O/R + 3·RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
  suppose M/X is an integer
  1 TCP connection for the base file
  M/X sets of parallel connections for the images
  Response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times
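A small calculator for the three formulas (ignoring the "sum of idle times" terms), using the parameter values from the charts that follow:

```python
# Response-time formulas from this slide, without the slow-start idle times.
# O in bits, R in bps, RTT in seconds.

def non_persistent(O, R, RTT, M):
    return (M + 1) * O / R + (M + 1) * 2 * RTT

def persistent(O, R, RTT, M):
    return (M + 1) * O / R + 3 * RTT

def parallel_non_persistent(O, R, RTT, M, X):
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT

O, RTT, M, X = 5 * 8e3, 0.100, 10, 5     # 5 KB objects, 100 ms RTT, 10 images, X = 5
for R in (28e3, 100e3, 1e6, 10e6):
    print(f"R={R/1e3:>6.0f} kbps  "
          f"non-pers={non_persistent(O, R, RTT, M):6.2f}s  "
          f"pers={persistent(O, R, RTT, M):6.2f}s  "
          f"parallel={parallel_non_persistent(O, R, RTT, M, X):6.2f}s")
```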

3-112

[Chart: HTTP response time in seconds (0-20) with RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5, for link rates of 28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps, comparing non-persistent, persistent, and parallel non-persistent HTTP]

For a low-bandwidth connection, response time is dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

[Chart: HTTP response time in seconds (0-70) with RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5, for link rates of 28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps, comparing non-persistent, persistent, and parallel non-persistent HTTP]

For larger RTT, response time is dominated by TCP establishment & slow start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services:
  multiplexing, demultiplexing
  reliable data transfer
  flow control
  congestion control
instantiation and implementation in the Internet:
  UDP
  TCP

Next:
  leaving the network "edge" (application, transport layers)
  into the network "core"

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study: ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell:
  congested switch may lower the ER value in the cell
  sender's send rate is thus the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch
  if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP (segment structure, reliable data transfer, flow control, connection management)

3.6 Principles of congestion control

3.7 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission: LastByteSent - LastByteAcked ≤ CongWin

roughly, rate = CongWin / RTT bytes/sec

CongWin is a dynamic function of perceived network congestion

How does the sender perceive congestion?
  loss event = timeout or 3 duplicate ACKs
  TCP sender reduces rate (CongWin) after a loss event

three mechanisms:
  AIMD
  slow start
  conservative after timeout events

Note: congestion control must also be efficient, making use of the available bandwidth.

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

[Figure: congestion window of a long-lived TCP connection vs. time: a sawtooth, with the vertical axis marked at 8, 16 and 24 Kbytes.]

Role of ACKs:
  1. An indication of loss
  2. ACKs are self-clocking: if delay is large, the congestion window grows more slowly

Human analogy: HK government or CUHK?
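A minimal sketch (Python; the loss pattern and starting window are made up purely for illustration) of how these two rules produce the sawtooth above:

  # AIMD: +1 MSS per RTT while no loss, halve the window on a loss event.
  MSS_BYTES, RTT = 1500, 0.1          # assumed segment size and round-trip time
  cwnd = 10.0                         # window measured in MSS units
  trace = []
  for rtt in range(1, 41):
      if rtt % 10 == 0:               # assumed: one loss event every 10 RTTs
          cwnd = max(cwnd / 2, 1.0)   # multiplicative decrease
      else:
          cwnd += 1.0                 # additive increase (1 MSS per RTT)
      trace.append(round(cwnd, 1))
  print(trace)                                        # linear climb, halve, repeat
  print(cwnd * MSS_BYTES * 8 / RTT / 1e6, "Mbps")     # rate = CongWin / RTT

Printing the trace shows linear ramps separated by halvings, i.e. the sawtooth sketched on this slide.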

3-90

TCP Slow Start

When connection begins, CongWin = 1 MSS
  Example: MSS = 500 bytes & RTT = 200 msec
  initial rate = 20 kbps

available bandwidth may be >> MSS/RTT
  desirable to quickly ramp up to a respectable rate

When connection begins, increase rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)

When connection begins, increase rate exponentially until the first loss event:
  double CongWin every RTT
  done by incrementing CongWin for every ACK received

Summary: initial rate is slow but ramps up exponentially fast

[Figure: Host A sends one segment, then two segments, then four segments to Host B, one RTT apart.]
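A small sketch (Python, using the example numbers from the previous slide) of the per-ACK increment that doubles CongWin every RTT:

  # Slow start: CongWin grows by 1 MSS for every ACK received, so a full
  # window of ACKs per RTT doubles the window each RTT until the first loss.
  MSS_BITS, RTT = 500 * 8, 0.2        # 500-byte segments, 200 msec RTT (slide example)
  cwnd = 1                            # window in segments
  for rtt in range(5):
      rate_kbps = cwnd * MSS_BITS / RTT / 1000
      print(f"RTT {rtt}: cwnd = {cwnd} segment(s), rate = {rate_kbps:.0f} kbps")
      cwnd += cwnd                    # one ACK per outstanding segment -> doubling

The first line printed is the 20 kbps initial rate quoted on the previous slide.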

3-92

Refinement

After 3 dup ACKs:
  CongWin is cut in half
  window then grows linearly
  This is known as the “fast recovery” phase, implemented in the newer TCP Reno.

But after a timeout event:
  CongWin is instead set to 1 MSS
  window then grows exponentially to a threshold, then grows linearly

Philosophy:
  3 dup ACKs indicate the network is capable of delivering some segments
  a timeout before 3 dup ACKs is “more alarming”

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?

A: When CongWin gets to 1/2 of its value before timeout.

Implementation:
  Variable: Threshold
  At a loss event, Threshold is set to 1/2 of CongWin just before the loss event

[Figure: congestion window size (segments) vs. transmission round (1-15) for TCP Tahoe and TCP Reno, with the threshold marked.]

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, sender is in slow-start phase; window grows exponentially.

When CongWin is above Threshold, sender is in congestion-avoidance phase; window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss.

3-95

TCP sender congestion control

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to “Congestion Avoidance”
Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
Action: CongWin = CongWin + MSS·(MSS/CongWin)
Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
State: SS or CA
Action: Threshold = CongWin/2, CongWin = Threshold, set state to “Congestion Avoidance”
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
State: SS or CA
Action: Threshold = CongWin/2, CongWin = 1 MSS, set state to “Slow Start”
Commentary: enter slow start

Event: duplicate ACK
State: SS or CA
Action: increment duplicate ACK count for segment being acked
Commentary: CongWin and Threshold not changed
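A compact Python sketch of the table above (window measured in MSS units; the event sequence fed in at the bottom is made up purely to exercise every row):

  def tcp_sender(events, cwnd=1.0, threshold=8.0):
      state, dupacks = "SS", 0
      for ev in events:
          if ev == "new_ack":                     # ACK for previously unacked data
              dupacks = 0
              if state == "SS":
                  cwnd += 1.0                     # doubles CongWin every RTT
                  if cwnd > threshold:
                      state = "CA"
              else:
                  cwnd += 1.0 / cwnd              # +1 MSS per RTT in congestion avoidance
          elif ev == "dup_ack":
              dupacks += 1
              if dupacks == 3:                    # triple duplicate ACK
                  threshold = cwnd / 2
                  cwnd, state, dupacks = threshold, "CA", 0   # fast recovery
          elif ev == "timeout":
              threshold = cwnd / 2
              cwnd, state, dupacks = 1.0, "SS", 0             # back to slow start
          print(f"{ev:8s} state={state} cwnd={cwnd:5.2f} threshold={threshold:5.2f}")

  tcp_sender(["new_ack"] * 6 + ["dup_ack"] * 3 + ["new_ack"] * 2 + ["timeout", "new_ack"])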

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when loss occurs.

When the window is W, throughput is W/RTT.

Just after a loss, the window drops to W/2, throughput to W/(2·RTT).

Average throughput: 0.75 W/RTT
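For instance (Python, with assumed numbers: W = 20 segments of 1500 bytes, RTT = 100 msec):

  W, MSS_BITS, RTT = 20, 1500 * 8, 0.1
  peak    = W * MSS_BITS / RTT              # just before a loss
  trough  = (W / 2) * MSS_BITS / RTT        # just after a loss
  average = 0.75 * W * MSS_BITS / RTT       # midpoint of the linear ramp from W/2 to W
  print(peak / 1e6, trough / 1e6, average / 1e6, "Mbps")   # 2.4, 1.2, 1.8 Mbps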

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput

Requires window size W = 83,333 in-flight segments

Throughput in terms of loss rate:
  Throughput = 1.22 · MSS / (RTT · √L)

To sustain 10 Gbps, we need L = 2·10^-10. Wow!

New versions of TCP are needed for high-speed links.
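Plugging the slide's numbers into that formula (a Python sketch; the 1.22·MSS/(RTT·√L) relation is the standard steady-state approximation quoted above):

  MSS_BITS = 1500 * 8
  RTT      = 0.1                 # 100 ms
  target   = 10e9                # 10 Gbps

  W = target * RTT / MSS_BITS                      # in-flight segments needed: ~83,333
  L = (1.22 * MSS_BITS / (RTT * target)) ** 2      # tolerable loss rate
  print(round(W), L)                               # 83333  ~2e-10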

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.

[Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R.]

3-99

Why is TCP fair?

Two competing homogeneous sessions (e.g., similar propagation delay, etc.):
  additive increase gives a slope of 1 as throughput increases
  multiplicative decrease decreases throughput proportionally

[Figure: Connection 2 throughput vs. Connection 1 throughput, both axes bounded by R; alternating congestion-avoidance additive increases and halve-on-loss decreases move the operating point toward the equal bandwidth share line.]
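A small simulation (Python; the starting rates are made up) of the geometric argument above: additive increase moves both throughputs along a 45° direction, and a joint multiplicative decrease halves the gap between them.

  R = 100.0                      # bottleneck capacity (arbitrary units)
  x1, x2 = 10.0, 70.0            # assumed unequal starting throughputs
  for step in range(60):
      if x1 + x2 > R:            # both connections see loss when the link overflows
          x1, x2 = x1 / 2, x2 / 2        # multiplicative decrease for both
      else:
          x1, x2 = x1 + 1, x2 + 1        # additive increase for both
  print(round(x1, 1), round(x2, 1))      # the two rates end up close to each other

Each halving cuts the difference x2 - x1 in half while the +1/+1 steps leave it unchanged, so the trajectory converges toward the equal-bandwidth-share line.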

3-100

Fairness (more)

Fairness and UDP:
  multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
  instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
  research area: TCP-friendly congestion control

Fairness and parallel TCP connections:
  nothing prevents an app from opening parallel connections between 2 hosts
  Web browsers do this
  Example: link of rate R supporting 9 connections
    a new app asking for 1 TCP connection gets rate R/10
    a new app asking for 11 TCP connections gets around R/2!
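The arithmetic behind that example, as a quick sketch (assuming the link ends up shared equally per connection):

  R = 1.0                                   # link rate (normalized)
  existing = 9                              # connections already using the link
  one_conn    = R / (existing + 1)          # new app opens 1 connection:  R/10
  eleven_conn = 11 * R / (existing + 11)    # new app opens 11 connections: 11R/20, about R/2
  print(one_conn, eleven_conn)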

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by:
  TCP connection establishment
  data transmission delay
  slow start (the sender's congestion window)

Notation, assumptions:
  assume one link between client and server, of rate R
  S: MSS (bits)
  O: object size (bits)
  no retransmissions (no loss, no corruption)
  protocol overhead is negligible

Window size:
  first assume a fixed congestion window of W segments
  then a dynamic window, modeling slow start

3-102

Fixed congestion window (1): assume no congestion-window constraint

Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data has been sent

  delay = 2RTT + O/R

The object request is piggybacked.

This is the “lower bound” of latency.

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R: wait for the ACK after sending a window's worth of data

  delay = 2RTT + O/R + (K-1)·[S/R + RTT - WS/R]

where K is the number of windows of data that cover the object. If O is the “size” of the object, K = O/(WS), rounded up to an integer.
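A sketch covering both static-window cases (Python; the sample inputs at the bottom are hypothetical):

  import math

  def static_window_delay(O, S, R, RTT, W):
      K = math.ceil(O / (W * S))                    # windows of data that cover the object
      if W * S / R >= RTT + S / R:                  # case 1: ACK returns before window is drained
          return 2 * RTT + O / R
      # case 2: sender stalls after each of the first K-1 windows
      return 2 * RTT + O / R + (K - 1) * (S / R + RTT - W * S / R)

  # e.g. 100-kbyte object, 536-byte segments, 1 Mbps link, 100 msec RTT, W = 4 segments
  print(static_window_delay(O=800_000, S=536 * 8, R=1e6, RTT=0.1, W=4))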

3-104

TCP Delay Modeling: Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the latency for one object is:

  Latency = 2RTT + O/R + P·(RTT + S/R) - (2^P - 1)·S/R

where P is the number of times TCP idles at the server:

  P = min{Q, K - 1}

  where Q is the number of times the server would idle if the object were of infinite size,
  and K is the number of windows that cover the object.

3-105

TCP Delay Modeling: Slow Start (2)

[Figure: timeline at client and server: initiate TCP connection, request object; first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R; complete transmission, object delivered.]

Example:
  O/S = 15 segments
  K = 4 windows
  Q = 2
  P = min{K-1, Q} = 2
  Server idles P = 2 times

Delay components:
  2 RTT for connection establishment and request
  O/R to transmit the object
  time the server idles due to slow start

Server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

time to transmit the kth window = 2^(k-1) · S/R

idle time after the kth window = [S/R + RTT - 2^(k-1) · S/R]^+

(S/R + RTT is the time from when the server starts to send a segment until it receives the acknowledgement; [x]^+ means max(x, 0); k = 1, 2, …, K)

delay = 2RTT + O/R + sum over p = 1..P of idleTime_p
      = 2RTT + O/R + sum over k = 1..P of [S/R + RTT - 2^(k-1) · S/R]
      = 2RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

[Figure: the client/server timeline again: initiate TCP connection, request object; first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R; complete transmission, object delivered.]

3-107

TCP Delay Modeling (4)

Recall K = number of windows that cover the object. How do we calculate K?

  K = min{k : 2^0·S + 2^1·S + … + 2^(k-1)·S ≥ O}
    = min{k : (2^k - 1)·S ≥ O}
    = min{k : 2^k ≥ O/S + 1}
    = min{k : k ≥ log2(O/S + 1)}
    = ⌈log2(O/S + 1)⌉

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
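A Python sketch of the whole slow-start model. K comes from the derivation above; the closed form used for Q (the slide leaves it as homework) is the usual one, so treat it as an assumption. With these inputs the results land close to the Experiment A-C tables on the next slides (small differences are rounding in the slides).

  import math

  def slow_start_latency(O, S, R, RTT):
      K = math.ceil(math.log2(O / S + 1))              # windows that cover the object
      Q = math.ceil(math.log2(1 + RTT / (S / R)))      # idles for an infinite object (assumed form)
      P = min(Q, K - 1)                                # times the server actually idles
      latency = 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R
      return latency, P

  S = 536 * 8
  for R in (28e3, 100e3, 1e6, 10e6):                   # Experiment A: O = 100 kbytes, RTT = 100 msec
      latency, P = slow_start_latency(O=800_000, S=S, R=R, RTT=0.1)
      print(f"R = {R/1e3:6.0f} kbps   P = {P}   latency = {latency:5.2f} s")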

3-108

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

R        | O/R      | P | Min latency: O/R + 2·RTT | Latency with slow start
28 kbps  | 28.6 sec | 1 | 28.8 sec                 | 28.9 sec
100 kbps | 8 sec    | 2 | 8.2 sec                  | 8.4 sec
1 Mbps   | 800 msec | 5 | 1 sec                    | 1.5 sec
10 Mbps  | 80 msec  | 7 | 0.28 sec                 | 0.98 sec

Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

R        | O/R      | P | Min latency: O/R + 2·RTT | Latency with slow start
28 kbps  | 1.43 sec | 1 | 1.63 sec                 | 1.73 sec
100 kbps | 0.4 sec  | 2 | 0.6 sec                  | 0.76 sec
1 Mbps   | 40 msec  | 3 | 0.24 sec                 | 0.52 sec
10 Mbps  | 4 msec   | 3 | 0.20 sec                 | 0.50 sec

Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

R        | O/R      | P | Min latency: O/R + 2·RTT | Latency with slow start
28 kbps  | 1.43 sec | 3 | 3.4 sec                  | 5.8 sec
100 kbps | 0.4 sec  | 3 | 2.4 sec                  | 5.2 sec
1 Mbps   | 40 msec  | 3 | 2.0 sec                  | 5.0 sec
10 Mbps  | 4 msec   | 3 | 2.0 sec                  | 5.0 sec

Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
  1 base HTML page (of size O bits)
  M images (each of size O bits)

Non-persistent HTTP:
  M+1 TCP connections in series
  Response time = (M+1)·O/R + (M+1)·2RTT + sum of idle times

Persistent HTTP:
  2 RTT to request and receive the base HTML file
  1 RTT to request and receive the M images
  Response time = (M+1)·O/R + 3RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
  suppose M/X is an integer
  1 TCP connection for the base file
  M/X sets of parallel connections for the images
  Response time = (M+1)·O/R + (M/X + 1)·2RTT + sum of idle times
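A sketch that evaluates the three response-time expressions above without the slow-start idle-time terms (so it captures the transmission and RTT components only; inputs match the chart on the next slide):

  def http_response_times(O, R, RTT, M, X):
      tx = (M + 1) * O / R                       # total transmission time
      non_persistent = tx + (M + 1) * 2 * RTT
      persistent     = tx + 3 * RTT
      parallel       = tx + (M / X + 1) * 2 * RTT
      return non_persistent, persistent, parallel

  # RTT = 100 msec, O = 5 Kbytes, M = 10 images, X = 5 parallel connections
  for R in (28e3, 100e3, 1e6, 10e6):
      np_, p_, par_ = http_response_times(O=40_000, R=R, RTT=0.1, M=10, X=5)
      print(f"{R/1e3:6.0f} kbps: non-persistent {np_:5.2f} s, persistent {p_:5.2f} s, parallel {par_:5.2f} s")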

3-112

[Bar chart: HTTP response time (in seconds) at access rates of 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps, comparing non-persistent, persistent and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10 and X = 5.]

For a low-bandwidth connection, connection and response time are dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

[Bar chart: HTTP response time (in seconds) at the same access rates; RTT = 1 sec, O = 5 Kbytes, M = 10 and X = 5.]

For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay×bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services:
  multiplexing, demultiplexing
  reliable data transfer
  flow control
  congestion control

instantiation and implementation in the Internet:
  UDP
  TCP

Next:
  leaving the network “edge” (application, transport layers)
  into the network “core”

Page 21: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-21

Reliable data transfer getting startedWersquoll incrementally develop sender receiver

sides of reliable data transfer protocol (rdt)

consider only unidirectional data transfer but control info will flow on both directions

use finite state machines (FSM) to specify sender receiver

state1

state2

event causing state transitionactions taken on state transition

state when in this ldquostaterdquo next

state uniquely determined by

next eventeventactions

(DONrsquoT FALL ASLEEP)

Event timer receives message hellipetcAction executes a program send message hellipetc

3-22

Rdt10 reliable transfer over a reliable channel

underlying channel perfectly reliable no bit errors no loss of packets

separate FSMs for sender receiver sender sends data into underlying channel receiver reads data from underlying channel

Wait for call from

above packet = make_pkt(data)udt_send(packet)

rdt_send(data)

extract (packetdata)deliver_data(data)

Wait for call from

below

rdt_rcv(packet)

sender receiver

In reality this is an unrealistic assumption but

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in packet recall UDP checksum to detect bit errors

the question how to recover from errors acknowledgements (ACKs) receiver explicitly tells

sender that pkt received OK negative acknowledgements (NAKs) receiver

explicitly tells sender that pkt had errors sender retransmits pkt on receipt of NAK human scenarios using ACKs NAKs

new mechanisms in rdt20 (beyond rdt10) error detection receiver feedback control msgs (ACKNAK) rcvr-

gtsender

Ack I love u I love u 2Nak I love u I donrsquot love u

3-24

rdt20 FSM specification

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

belowsender

receiverrdt_send(data)

Buffer is needed to store data from application layer or to block call

3-25

rdt20 operation with no errors

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

3-26

rdt20 error scenario

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

GOT IT

3-27

rdt20 has a fatal flaw

What happens if ACKNAK corrupted

sender doesnrsquot know what happened at receiver

canrsquot just retransmit possible duplicate

What to do sender ACKsNAKs

receiverrsquos ACKNAK What if sender ACKNAK lost

retransmit but this might cause retransmission of correctly received pkt

Handling duplicates sender adds sequence

number to each pkt sender retransmits

current pkt if ACKNAK garbled

receiver discards (doesnrsquot deliver up) duplicate pkt

Sender sends one packet then waits for receiver response

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo


3-22

Rdt10 reliable transfer over a reliable channel

underlying channel perfectly reliable: no bit errors, no loss of packets
separate FSMs for sender and receiver: the sender sends data into the underlying channel; the receiver reads data from the underlying channel

sender FSM (state "wait for call from above"):
  rdt_send(data): packet = make_pkt(data); udt_send(packet)
receiver FSM (state "wait for call from below"):
  rdt_rcv(packet): extract(packet, data); deliver_data(data)

In reality this is an unrealistic assumption, but it is a useful starting point.

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in a packet: recall that a UDP-style checksum detects bit errors
the question: how to recover from errors?
  acknowledgements (ACKs): receiver explicitly tells the sender that the pkt was received OK
  negative acknowledgements (NAKs): receiver explicitly tells the sender that the pkt had errors; the sender retransmits the pkt on receipt of a NAK
  human scenario using ACKs/NAKs:
    ACK: "I love you" -> "I love you too"
    NAK: "I love you" -> "I don't love you"
new mechanisms in rdt2.0 (beyond rdt1.0): error detection; receiver feedback: control msgs (ACK/NAK) from receiver to sender

3-24

rdt20 FSM specification

sender FSM:
  state "wait for call from above":
    rdt_send(data): sndpkt = make_pkt(data, checksum); udt_send(sndpkt)   -> go to "wait for ACK or NAK"
  state "wait for ACK or NAK":
    rdt_rcv(rcvpkt) && isNAK(rcvpkt): udt_send(sndpkt)                    (retransmit)
    rdt_rcv(rcvpkt) && isACK(rcvpkt): (nothing)                           -> back to "wait for call from above"
receiver FSM:
  state "wait for call from below":
    rdt_rcv(rcvpkt) && corrupt(rcvpkt): udt_send(NAK)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt): extract(rcvpkt, data); deliver_data(data); udt_send(ACK)
A buffer is needed to store data arriving from the application layer while waiting for the ACK/NAK, or the call must block.

3-25

rdt20 operation with no errors

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

3-26

rdt20 error scenario

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

GOT IT

3-27

rdt20 has a fatal flaw

What happens if the ACK/NAK is corrupted?
  the sender doesn't know what happened at the receiver!
  it can't just retransmit: possible duplicate
What to do?
  have the sender ACK/NAK the receiver's ACK/NAK? What if that ACK/NAK is lost too?
  retransmit anyway? This might cause retransmission of a correctly received pkt.
Handling duplicates:
  sender adds a sequence number to each pkt
  sender retransmits the current pkt if the ACK/NAK is garbled
  receiver discards (doesn't deliver up) a duplicate pkt
The sender sends one packet, then waits for the receiver's response:
stop-and-wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (no NAKs).

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption: the underlying channel can also lose packets (data or ACKs)
  checksum, seq. #'s, ACKs and retransmissions will help, but are not enough
Q: how to deal with loss?
  wait until the sender is certain that data or an ACK was lost, then retransmit? drawback: loss can never be detected with certainty just by waiting
Approach: the sender waits a "reasonable" amount of time for an ACK
  retransmits if no ACK is received in this time
  if the pkt (or ACK) was just delayed (not lost): the retransmission will be a duplicate, but use of seq. #'s already handles this; the receiver must specify the seq # of the pkt being ACKed
  requires a countdown timer
What is the "right value" for the timer? It depends on the flow and the network conditions.
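To make the timer discussion concrete, here is a minimal stop-and-wait (rdt3.0-style) sender sketch in Python over UDP. The receiver address, the 1-byte sequence-number framing and the fixed 1-second timeout are illustrative assumptions, not part of the protocol specification; the peer is assumed to echo an ACK carrying the same sequence number.

    import socket

    def stop_and_wait_send(data: bytes, receiver=("127.0.0.1", 9999), timeout=1.0):
        """Stop-and-wait sender: one packet outstanding, retransmit on timeout."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)                 # countdown timer for the unACKed packet
        seq = 0
        for i in range(0, len(data), 1000):
            pkt = bytes([seq]) + data[i:i + 1000]   # 1-byte seq number + payload
            while True:
                sock.sendto(pkt, receiver)
                try:
                    ack, _ = sock.recvfrom(16)
                    if ack and ack[0] == seq:    # ACK names the expected seq #
                        break                    # delivered; "stop the timer"
                except socket.timeout:
                    pass                         # timeout: retransmit the same packet
            seq = 1 - seq                        # alternate 0/1 sequence numbers
        sock.close()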

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt3.0 works, but its performance stinks. Example: 1 Gbps link, 15 ms end-to-end propagation delay, 1 KB packet
(L = packet length in bits, R = transmission rate in bps):
  T_transmit = L/R = 8000 bits / 10^9 bits/sec = 8 microsec
  U_sender (utilization - fraction of time the sender is busy sending)
           = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027
  one 1 KB pkt every ~30 msec -> about 33 kB/sec throughput over a 1 Gbps link
  the network protocol limits the use of the physical resources!

3-38

rdt30 stop-and-wait operation

first packet bit transmitted at t = 0; last packet bit transmitted at t = L/R
first packet bit arrives at the receiver; last packet bit arrives, receiver sends ACK
ACK arrives back at the sender, next packet sent at t = RTT + L/R

U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted at t = 0; last bit transmitted at t = L/R
first packet bit arrives at the receiver; last bit arrives, receiver sends ACK; the same happens for the 2nd and 3rd packets
ACK arrives, next packet sent, at t = RTT + L/R

U_sender = (3 L/R) / (RTT + L/R) = 0.024 / 30.008 = 0.0008
Pipelining three packets increases utilization by a factor of 3.
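The two utilization numbers above can be reproduced with a small helper; this is a sketch in which R is the link rate in bits/sec, L the packet length in bits, RTT in seconds, and N = 1 gives stop-and-wait.

    def sender_utilization(L_bits, R_bps, rtt_s, N=1):
        """Fraction of time the sender is busy when it may keep N packets in flight."""
        transmit = L_bits / R_bps                     # time to push one packet onto the link
        return (N * transmit) / (rtt_s + transmit)    # busy time over one "round"

    # 1 KB packet, 1 Gbps link, 30 ms RTT:
    print(sender_utilization(8000, 1e9, 0.030))       # ~0.00027 (stop-and-wait)
    print(sender_utilization(8000, 1e9, 0.030, N=3))  # ~0.0008  (3 pipelined packets)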

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q: what happens when a receiver is totally disconnected? MAX RETRY: the sender gives up after some maximum number of retransmissions.

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

base = 1; nextseqnum = 1

rdt_send(data):
  if (nextseqnum < base + N):
    sndpkt[nextseqnum] = make_pkt(nextseqnum, data, chksum)
    udt_send(sndpkt[nextseqnum])
    if (base == nextseqnum) start_timer          /* no pkt was in the pipe */
    nextseqnum++
  else
    refuse_data(data)                            /* buffer data, or block the higher app */

timeout:
  start_timer                                    /* reset timer */
  udt_send(sndpkt[base]); udt_send(sndpkt[base+1]); ...; udt_send(sndpkt[nextseqnum-1])

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
  base = getacknum(rcvpkt) + 1
  if (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) && corrupt(rcvpkt): (do nothing)

3-43

GBN receiver extended FSM

ACK-only: always send an ACK for the correctly-received pkt with the highest in-order seq #
  may generate duplicate ACKs
  need only remember expectedseqnum
out-of-order pkt: discard (don't buffer) -> no receiver buffering! Re-ACK the pkt with the highest in-order seq #

expectedseqnum = 1; sndpkt = make_pkt(expectedseqnum, ACK, chksum)

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum):
  extract(rcvpkt, data); deliver_data(data)
  sndpkt = make_pkt(expectedseqnum, ACK, chksum); udt_send(sndpkt)
  expectedseqnum++
default: udt_send(sndpkt)

If an in-order pkt is received, deliver it to the app and ACK it; otherwise just drop it.
We don't even need a timer at the receiver: simply rely on retransmission by the sender.
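A compact Go-Back-N sender sketch in Python; udt_send(pkt) and the timer object with start()/stop() are assumed placeholders for whatever channel and timer facility is actually available, and the checksum is omitted.

    class GBNSender:
        def __init__(self, N, udt_send, timer):
            self.N = N                      # window size
            self.base = 1                   # oldest unACKed seq #
            self.nextseqnum = 1
            self.buf = {}                   # seq # -> packet, kept for retransmission
            self.udt_send = udt_send
            self.timer = timer

        def rdt_send(self, data):
            if self.nextseqnum < self.base + self.N:       # room in the window?
                pkt = (self.nextseqnum, data)              # make_pkt (no checksum here)
                self.buf[self.nextseqnum] = pkt
                self.udt_send(pkt)
                if self.base == self.nextseqnum:           # nothing was in flight
                    self.timer.start()
                self.nextseqnum += 1
                return True
            return False                                   # refuse_data: caller buffers/blocks

        def on_ack(self, acknum):                          # cumulative ACK
            self.base = acknum + 1
            if self.base == self.nextseqnum:
                self.timer.stop()                          # nothing left in the pipe
            else:
                self.timer.start()                         # restart for the new oldest segment

        def on_timeout(self):
            self.timer.start()
            for seq in range(self.base, self.nextseqnum):  # resend everything unACKed
                self.udt_send(self.buf[seq])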

3-44

GBN in action

What determines the size of the window?
  1. RTT
  2. buffer at the receiver (flow control)
  3. network congestion
Q: why can GBN have poor performance? The sender sends pkts 1,2,3,4,5,6,7,8,9; pkt 1 gets lost; the receiver gets pkts 2,3,4,5,... but will discard them all.
Window size N = 4

3-45

Selective Repeat (improvement of the GBN protocol)
  receiver individually acknowledges all correctly received pkts
    buffers pkts as needed, for eventual in-order delivery to the upper layer
    e.g. the sender sends pkts 1,2,3,...,10; the receiver got 2,4,6,8,10; the sender resends only 1,3,5,7,9
  sender only resends pkts for which an ACK has not been received
    sender keeps a timer for EACH unACKed pkt
  sender window: N consecutive seq #'s; again limits the seq #'s of sent, unACKed pkts

3-46

Selective repeat sender receiver windows

Q: why can the sender's and receiver's windows disagree?
Because an ACK may be lost or still on its way.

3-47

Selective repeat

sender:
  data from above: if the next available seq # is in the window, send the pkt
  timeout(n): resend pkt n only, restart its timer
  ACK(n) in [sendbase, sendbase+N]: mark pkt n as received; if n is the smallest unACKed pkt, advance the window base to the next unACKed seq # (slide the window)
receiver:
  pkt n in [rcvbase, rcvbase+N-1]: send ACK(n); if out-of-order, buffer it; if in-order, deliver it (also deliver any buffered in-order pkts) and advance the window to the next not-yet-received pkt
  pkt n in [rcvbase-N, rcvbase-1]: ACK(n) again. (Q: why do we need this? The earlier ACK may have been lost; the sender may time out and resend the pkt, so we must ACK it.)
  otherwise: ignore
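A sketch of the selective-repeat receiver logic in Python; send_ack and deliver are assumed callbacks, and packets are handed in as (seq, data) pairs.

    class SRReceiver:
        def __init__(self, N, send_ack, deliver):
            self.N = N
            self.rcvbase = 1                 # lowest not-yet-delivered seq #
            self.buffer = {}                 # out-of-order packets awaiting delivery
            self.send_ack = send_ack
            self.deliver = deliver

        def on_packet(self, seq, data):
            if self.rcvbase <= seq < self.rcvbase + self.N:
                self.send_ack(seq)                       # ACK individually
                self.buffer.setdefault(seq, data)
                while self.rcvbase in self.buffer:       # deliver any in-order run
                    self.deliver(self.buffer.pop(self.rcvbase))
                    self.rcvbase += 1                    # slide the window
            elif self.rcvbase - self.N <= seq < self.rcvbase:
                self.send_ack(seq)                       # re-ACK: our earlier ACK may be lost
            # otherwise: ignore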

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example: seq #'s 0, 1, 2, 3; window size N = 3
  the receiver sees no difference in the two scenarios!
  it incorrectly passes duplicate data up as new in (a)
Q: what is the relationship between seq # space size and window size?
  N <= 2^k / 2: the window must be no larger than half the sequence-number space.
In real life we use k bits to implement the seq #, so this is a practical issue.

3-50

Why bother study reliable data transfer

We know it is provided by TCP, so why bother to study it?
Sometimes we may need to implement "some form" of reliable transfer without heavyweight TCP.
A good example is multimedia streaming: even though the application is loss tolerant, if too many packets get lost the visual quality suffers, so we may want to implement some form of reliable transfer ourselves.
At the very least, appreciate the "good services" provided by some Internet gurus.

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data: bi-directional data flow in the same connection; MSS: maximum segment size
connection-oriented: handshaking (exchange of control msgs) inits sender and receiver state (e.g. buffer sizes) before data exchange
flow controlled: the sender will not overwhelm the receiver
point-to-point: one sender, one receiver (not multicast)
reliable, in-order byte stream: no "message boundaries" (in the app layer we need delimiters)
pipelined: TCP congestion and flow control set the window size
send and receive buffers: the application writes data at the socket door into the TCP send buffer; segments carry it to the TCP receive buffer, from which the application reads data
(The 800-lb gorilla in the transport stack: PAY ATTENTION!)

3-53

TCP segment structure

32-bit-wide segment layout: source port #, dest port #; sequence number; acknowledgement number; head len / unused / flags (URG, ACK, PSH, RST, SYN, FIN); receive window; checksum; urgent data pointer; options (variable length); application data (variable length)
  URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection establishment (setup, teardown commands)
  receive window: # bytes the rcvr is willing to accept
  sequence and ACK numbers count by "bytes" of data (not segments!)
  Internet checksum (as in UDP)
  Due to the options field we have a variable-length header.

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and Timeout
EstimatedRTT = (1 - α) * EstimatedRTT + α * SampleRTT

Exponential weighted moving average: the influence of a past sample decreases exponentially fast; typical value: α = 0.125

Expanding the recursion (with ERTT(0) = 0):
  ERTT(1) = (1 - α) * ERTT(0) + α * SRTT(1) = α * SRTT(1)
  ERTT(2) = (1 - α) * α * SRTT(1) + α * SRTT(2)
  ERTT(3) = (1 - α)^2 * α * SRTT(1) + (1 - α) * α * SRTT(2) + α * SRTT(3)

3-57

Example RTT estimation: RTT from gaia.cs.umass.edu to fantasia.eurecom.fr
[Plot: SampleRTT and EstimatedRTT, in milliseconds (roughly 100-350 ms), versus time in seconds (1-106)]

3-58

TCP Round Trip Time and Timeout
Setting the timeout (Jacobson/Karels): EstimatedRTT plus a "safety margin"
  a large variation in EstimatedRTT -> a larger safety margin
  first estimate how much SampleRTT deviates from EstimatedRTT:
    DevRTT = (1 - β) * DevRTT + β * |SampleRTT - EstimatedRTT|   (typically β = 0.25)
  then set the timeout interval:
    TimeoutInterval = EstimatedRTT + 4 * DevRTT
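The two EWMA updates and the timeout rule fit in a small helper; α = 0.125 and β = 0.25 are the typical values quoted above, and the initial values used here are an assumption for the sketch.

    class RTTEstimator:
        def __init__(self, first_sample, alpha=0.125, beta=0.25):
            self.alpha, self.beta = alpha, beta
            self.estimated_rtt = first_sample          # seed with the first measurement
            self.dev_rtt = first_sample / 2            # common initial choice (assumption)

        def update(self, sample_rtt):
            # EstimatedRTT = (1-alpha)*EstimatedRTT + alpha*SampleRTT
            self.estimated_rtt = (1 - self.alpha) * self.estimated_rtt + self.alpha * sample_rtt
            # DevRTT = (1-beta)*DevRTT + beta*|SampleRTT - EstimatedRTT|
            self.dev_rtt = (1 - self.beta) * self.dev_rtt + self.beta * abs(sample_rtt - self.estimated_rtt)
            return self.timeout_interval()

        def timeout_interval(self):
            return self.estimated_rtt + 4 * self.dev_rtt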

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event) {

    event: data received from application above
      create TCP segment with sequence number NextSeqNum
      if (timer currently not running) start timer
      pass segment to IP
      NextSeqNum = NextSeqNum + length(data)

    event: timer timeout
      retransmit the not-yet-acknowledged segment with the smallest sequence number
      start timer

    event: ACK received, with ACK field value of y
      if (y > SendBase) {
        SendBase = y
        if (there are currently not-yet-acknowledged segments) start timer
      }
  }
} /* end of loop forever */

Comment: SendBase - 1 = last cumulatively ACKed byte.
Example: SendBase - 1 = 71, y = 73, so the rcvr wants 73+; y > SendBase, so new data is ACKed.

3-63

TCP retransmission scenarios

Two timeline scenarios:
  Lost ACK: Host A sends Seq=92, 8 bytes of data; Host B's ACK=100 is lost (X); A's timer for Seq=92 expires, A retransmits Seq=92, 8 bytes; B ACKs 100 again; SendBase stays at 100 until the ACK finally arrives.
  Premature timeout: Host A sends Seq=92, 8 bytes and then Seq=100, 20 bytes; B returns ACK=100 and ACK=120, but A's timer for Seq=92 expires first, so A retransmits Seq=92, 8 bytes; B replies with ACK=120; SendBase moves 100 -> 120.

3-64

TCP retransmission scenarios (more)

Cumulative ACK scenario: Host A sends Seq=92, 8 bytes and Seq=100, 20 bytes; the ACK=100 is lost (X), but ACK=120 arrives before A's timeout for Seq=92, so SendBase jumps to 120 and nothing is retransmitted.
Room for improvement?

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at receiver -> TCP receiver action:
  Arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed -> delayed ACK: wait up to 500 ms for the next segment; if no next segment, send ACK
  Arrival of in-order segment with expected seq #; one other segment has an ACK pending -> immediately send a single cumulative ACK, ACKing both in-order segments
  Arrival of out-of-order segment with higher-than-expected seq # (gap detected) -> immediately send a duplicate ACK, indicating the seq # of the next expected byte
  Arrival of a segment that partially or completely fills the gap -> immediately send an ACK, provided that the segment starts at the lower end of the gap
ACK the "largest in-order byte" seq #; delayed ACKs reduce feedback traffic by half.

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If the sender receives 3 duplicate ACKs for the same data, it assumes that the segment after the ACKed data was lost:
fast retransmit: resend the segment before the timer expires (no need to wait for the timeout).

3-67

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments) start timer
  }
  else {                                        /* a duplicate ACK for already-ACKed data */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y     /* fast retransmit */
  }

Fast retransmit algorithm. Q: why resend the pkt with seq # y? A: that is what the receiver expects.
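The same event handler written as a runnable sketch; resend and start_timer stand in for the real actions, and the state dictionary (send_base, dup-ACK counter, set of unACKed segments) is an illustrative assumption.

    def on_ack(y, state, resend, start_timer):
        """Handle one incoming ACK with value y, with fast retransmit after 3 dup ACKs."""
        if y > state["send_base"]:
            state["send_base"] = y                 # new data ACKed
            state["dup_acks"] = 0
            if state["unacked"]:
                start_timer()
        else:
            state["dup_acks"] += 1                 # duplicate ACK for already-ACKed data
            if state["dup_acks"] == 3:             # fast retransmit
                resend(y)                          # resend the segment the receiver expects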

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

the receive side of a TCP connection has a receive buffer
speed-matching service: matching the send rate to the receiving app's drain rate; the app process may be slow at reading from the buffer
flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast

3-70

TCP Flow control how it works

(Suppose the TCP receiver discards out-of-order segments.)
spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
The rcvr advertises the spare room by including the value of RcvWindow in segments.
The sender limits unACKed data to RcvWindow: this guarantees the receive buffer doesn't overflow.
This goes to show that the design of the header fields matters.
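The advertised window is just the spare room in the receive buffer; a one-line helper with names taken from the slide (the example values are made up).

    def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
        # spare room = RcvBuffer - (LastByteRcvd - LastByteRead)
        return rcv_buffer - (last_byte_rcvd - last_byte_read)

    print(rcv_window(rcv_buffer=65535, last_byte_rcvd=120000, last_byte_read=80000))  # 25535 bytes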

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection request:  client sends SYN=1, seq=client_isn
Connection granted:  server replies SYN=1, seq=server_isn, ack=client_isn+1
ACK:                 client replies SYN=0, seq=client_isn+1, ack=server_isn+1 (may carry data)
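Applications never build these segments themselves: the kernel runs the three-way handshake inside connect()/accept(). A minimal Python sketch, where the loopback address and port number are arbitrary assumptions.

    import socket, threading

    srv = socket.socket()                            # TCP listening socket
    srv.bind(("127.0.0.1", 5050))
    srv.listen(1)                                    # server is now contactable

    def accept_one():
        conn, _ = srv.accept()                       # returns once a handshake completes
        conn.close()

    t = threading.Thread(target=accept_one); t.start()
    cli = socket.socket()
    cli.connect(("127.0.0.1", 5050))                 # kernel sends SYN, gets SYN-ACK, sends ACK
    cli.close(); t.join(); srv.close()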

3-74

TCP Connection Management (cont)

Closing a connection

the client closes its socket: clientSocket.close()
Step 1: the client end system sends a TCP FIN control segment to the server.
Step 2: the server receives the FIN and replies with an ACK; it then closes the connection and sends its own FIN.
[Timeline: client sends FIN (close); server ACKs and sends FIN (close); client ACKs and enters timed wait before closed]
Q: why don't we combine the ACK and the FIN? The sending side may still have data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3: the client receives the FIN and replies with an ACK; it enters a "timed wait" state, during which it will respond with an ACK to any received FINs.
Step 4: the server receives the ACK; the connection is closed.
Note: with a small modification, this can handle simultaneous FINs.
[Timeline: client FIN (closing) -> server ACK, server FIN (closing) -> client ACK; both sides end up closed, the client after its timed wait]

3-76

TCP Connection Management (cont)

[Figures: TCP client lifecycle and TCP server lifecycle state diagrams]

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causes/costs of congestion: scenario 2 (cont.)
  always: λ_in = λ_out (goodput)
  "perfect" retransmission, only when loss: λ'_in > λ_out
  retransmission of delayed (not lost) packets makes λ'_in larger (than the perfect case) for the same λ_out
"costs" of congestion: more work (retransmissions) for a given "goodput"; unneeded retransmissions: the link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3
  four senders, multihop paths, timeout/retransmit
  Q: what happens as λ_in and λ'_in increase?
  finite shared output link buffers; Host A offers λ_in original data (λ'_in original plus retransmitted data); Host B receives λ_out

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission: LastByteSent - LastByteAcked <= CongWin
roughly: rate = CongWin / RTT  bytes/sec
CongWin is a dynamic function of perceived network congestion.
How does the sender perceive congestion? A loss event = timeout or 3 duplicate ACKs; the TCP sender reduces its rate (CongWin) after a loss event.
three mechanisms: AIMD, slow start, being conservative after timeout events
Note: congestion control must still be efficient, to make use of the available bandwidth.

3-89

TCP AIMD

[Figure: congestion window (8, 16, 24 Kbytes) sawtooth over time, for a long-lived TCP connection]
multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing
Role of ACKs: 1. an indication of loss; 2. ACKs are self-clocking: if the delay is large, the rate at which the congestion window grows is reduced
Human analogy: HK government or CUHK!

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.
Implementation: variable Threshold; at a loss event, Threshold is set to 1/2 of CongWin just before the loss event.
[Figure: congestion window size (segments), 0-14, versus transmission round, 1-15, comparing TCP Tahoe and TCP Reno; the dashed line marks the threshold]

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate and prepare for a loss.

3-95

TCP sender congestion control

Event: ACK receipt for previously unACKed data | State: Slow Start (SS) | Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance" | Commentary: results in a doubling of CongWin every RTT
Event: ACK receipt for previously unACKed data | State: Congestion Avoidance (CA) | Action: CongWin = CongWin + MSS * (MSS / CongWin) | Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT
Event: loss event detected by triple duplicate ACK | State: SS or CA | Action: Threshold = CongWin / 2; CongWin = Threshold; set state to "Congestion Avoidance" | Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS
Event: timeout | State: SS or CA | Action: Threshold = CongWin / 2; CongWin = 1 MSS; set state to "Slow Start" | Commentary: enter slow start
Event: duplicate ACK | State: SS or CA | Action: increment duplicate ACK count for the segment being ACKed | Commentary: CongWin and Threshold not changed
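The whole table can be collapsed into one update function; a sketch, with the window measured in MSS units and loss events passed in explicitly rather than detected.

    def tcp_update(congwin, threshold, event):
        """One step of the Reno-style window evolution from the table (congwin, threshold in MSS)."""
        if event == "new_ack":
            if congwin < threshold:                 # slow start: exponential growth
                congwin += 1                        # +1 MSS per ACK -> doubles each RTT
            else:                                   # congestion avoidance: linear growth
                congwin += 1.0 / congwin            # ~+1 MSS per RTT overall
        elif event == "triple_dup_ack":             # fast recovery: multiplicative decrease
            threshold = max(congwin / 2, 1)
            congwin = threshold
        elif event == "timeout":                    # more alarming: back to slow start
            threshold = max(congwin / 2, 1)
            congwin = 1
        return congwin, threshold

    w, th = 1, 8
    for ev in ["new_ack"] * 6 + ["triple_dup_ack", "new_ack", "timeout"]:
        w, th = tcp_update(w, th, ev)
        print(ev, round(w, 2), th)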

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)
Let W be the window size when loss occurs.
When the window is W, the throughput is W/RTT.
Just after a loss, the window drops to W/2, the throughput to (W/2)/RTT.
Average throughput: 0.75 * W/RTT

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput.
Requires a window size of W = 83,333 in-flight segments.
Throughput in terms of the loss rate: Throughput = 1.22 * MSS / (RTT * sqrt(L))
To reach 10 Gbps this needs a loss rate of L = 2 * 10^-10 -- wow! New versions of TCP are needed for high-speed links.
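Plugging the example numbers into the two expressions above (a sketch; units are bits and seconds):

    MSS = 1500 * 8                                  # segment size in bits
    RTT = 0.100                                     # seconds
    target = 10e9                                   # desired throughput: 10 Gbps

    W = target * RTT / MSS                          # in-flight segments needed
    L = (1.22 * MSS / (RTT * target)) ** 2          # loss rate, from Throughput = 1.22*MSS/(RTT*sqrt(L))
    print(round(W), L)                              # ~83333 segments, L ~ 2e-10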

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g. similar propagation delay, etc.):
  additive increase gives a slope of 1 as throughput increases
  multiplicative decrease decreases throughput proportionally
[Figure: connection 2 throughput versus connection 1 throughput, both axes up to the bottleneck capacity R, with the equal-bandwidth-share line; the trajectory alternates congestion avoidance (additive increase) with losses (window cut by a factor of 2) and converges toward the fair share]

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1): assume no congestion-window constraint

First case: W*S/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data has been sent.
  delay = 2*RTT + O/R
The object request is piggybacked; this is the "lower bound" on latency.
(Here W denotes a 'fixed' congestion window size, a positive integer.)

3-103

Fixed (or static) congestion window (2)

Second case: W*S/R < RTT + S/R: the sender must wait for an ACK after sending each window's worth of data.
  delay = 2*RTT + O/R + (K-1) * [S/R + RTT - W*S/R]
where K is the number of windows of data that cover the object: if O is the "size" of the object, K = O/(W*S), rounded up to an integer.

3-104

TCP Delay Modeling Slow Start (1)

Now suppose the window grows according to slow start.
We will show that the delay for one object is

  Latency = 2*RTT + O/R + P*[RTT + S/R] - (2^P - 1)*S/R

where P is the number of times TCP idles at the server:
  P = min{Q, K - 1}
with Q the number of times the server would idle if the object were of infinite size, and K the number of windows that cover the object.
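A sketch that evaluates the slow-start latency formula above, with O and S in bits and R in bits/sec. It uses K = ceil(log2(O/S + 1)) (derived two slides below) and an assumed closed form for Q, so small rounding differences from the experiment tables that follow are possible.

    from math import ceil, floor, log2

    def slow_start_latency(O_bits, S_bits, R_bps, rtt):
        """Latency = 2*RTT + O/R + P*(RTT + S/R) - (2**P - 1)*S/R  (slide formula)."""
        K = ceil(log2(O_bits / S_bits + 1))                # windows that cover the object
        Q = floor(log2(1 + rtt / (S_bits / R_bps))) + 1    # idles for an infinite object (assumed form)
        P = min(Q, K - 1)                                  # times the server actually idles
        return 2 * rtt + O_bits / R_bps + P * (rtt + S_bits / R_bps) - (2 ** P - 1) * S_bits / R_bps

    # S = 536 bytes, RTT = 100 ms, O = 100 Kbytes, R = 1 Mbps:
    print(round(slow_start_latency(100e3 * 8, 536 * 8, 1e6, 0.1), 2))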

3-105

TCP Delay Modeling Slow Start (2)

[Timeline, time at client / time at server: one RTT to initiate the TCP connection and one RTT for the object request; the server then sends a first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R, completes transmission, and the object is delivered]
Example: O/S = 15 segments, K = 4 windows, Q = 2, P = min{K-1, Q} = 2: the server idles P = 2 times.
Delay components: 2*RTT for connection establishment and request; O/R to transmit the object; the time the server idles due to slow start. The server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

[Same timeline as on the previous slide.] From it, for k = 1, 2, ..., K:
  S/R + RTT = time from when the server starts to send a segment until the server receives its acknowledgement
  2^(k-1) * S/R = time to transmit the k-th window
  idle time after the k-th window = max{ S/R + RTT - 2^(k-1)*S/R , 0 }

  delay = 2*RTT + O/R + sum over p = 1..P of idleTime_p
        = 2*RTT + O/R + sum over p = 1..P of [ S/R + RTT - 2^(p-1)*S/R ]
        = 2*RTT + O/R + P*[RTT + S/R] - (2^P - 1)*S/R

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?
  K = min{ k : 2^0*S + 2^1*S + ... + 2^(k-1)*S >= O }
    = min{ k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S }
    = min{ k : 2^k - 1 >= O/S }
    = min{ k : k >= log2(O/S + 1) }
    = ceil( log2(O/S + 1) )
The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).

3-108

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 Kbytes (relatively large)
  R = 28 kbps:  O/R = 28.6 sec,  P = 1, min latency (O/R + 2*RTT) = 28.8 sec, latency with slow start = 28.9 sec
  R = 100 kbps: O/R = 8 sec,     P = 2, min latency = 8.2 sec,  latency with slow start = 8.4 sec
  R = 1 Mbps:   O/R = 800 msec,  P = 5, min latency = 1.0 sec,  latency with slow start = 1.5 sec
  R = 10 Mbps:  O/R = 80 msec,   P = 7, min latency = 0.28 sec, latency with slow start = 0.98 sec
Observation: slow start adds appreciable delay only when R is high; if R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 Kbytes (relatively small)
  R = 28 kbps:  O/R = 1.43 sec, P = 1, min latency (O/R + 2*RTT) = 1.63 sec, latency with slow start = 1.73 sec
  R = 100 kbps: O/R = 0.4 sec,  P = 2, min latency = 0.6 sec,  latency with slow start = 0.76 sec
  R = 1 Mbps:   O/R = 40 msec,  P = 3, min latency = 0.24 sec, latency with slow start = 0.52 sec
  R = 10 Mbps:  O/R = 4 msec,   P = 3, min latency = 0.20 sec, latency with slow start = 0.50 sec
Observation: slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 Kbytes (relatively small)
  R = 28 kbps:  O/R = 1.43 sec, P = 3, min latency (O/R + 2*RTT) = 3.4 sec, latency with slow start = 5.8 sec
  R = 100 kbps: O/R = 0.4 sec,  P = 3, min latency = 2.4 sec, latency with slow start = 5.2 sec
  R = 1 Mbps:   O/R = 40 msec,  P = 3, min latency = 2.0 sec, latency with slow start = 5.0 sec
  R = 10 Mbps:  O/R = 4 msec,   P = 3, min latency = 2.0 sec, latency with slow start = 5.0 sec
Observation: slow start can significantly increase the latency when the object is relatively small and the RTT is relatively large.

3-111

HTTP Modeling. Assume a Web page consists of 1 base HTML page (of size O bits) and M images (each of size O bits).
Non-persistent HTTP: M+1 TCP connections in series
  Response time = (M+1)*O/R + (M+1)*2*RTT + sum of idle times
Persistent HTTP: 2*RTT to request and receive the base HTML file, 1*RTT to request and receive the M images
  Response time = (M+1)*O/R + 3*RTT + sum of idle times
Non-persistent HTTP with X parallel connections (suppose M/X is an integer): 1 TCP connection for the base file, M/X sets of parallel connections for the images
  Response time = (M+1)*O/R + (M/X + 1)*2*RTT + sum of idle times
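The three response-time expressions, ignoring the slow-start idle terms, can be compared directly; a sketch using the parameters of the charts below.

    def http_response_times(O_bits, R_bps, rtt, M, X):
        base = (M + 1) * O_bits / R_bps                # total transmission time
        return {
            "non-persistent":          base + (M + 1) * 2 * rtt,
            "persistent":              base + 3 * rtt,
            "parallel non-persistent": base + (M / X + 1) * 2 * rtt,
        }

    # O = 5 Kbytes, M = 10 images, X = 5 parallel connections, RTT = 100 ms, R = 100 kbps:
    for name, t in http_response_times(5e3 * 8, 100e3, 0.1, 10, 5).items():
        print(name, round(t, 2), "s")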

3-112

[Bar chart: HTTP response time (seconds, 0-20) for RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5, at access rates of 28 kbps, 100 kbps, 1 Mbps and 10 Mbps, comparing non-persistent, persistent and parallel non-persistent HTTP]
For a low-bandwidth connection, response time is dominated by transmission time.
Persistent connections give only a minor improvement over parallel connections.

3-113

[Bar chart: HTTP response time (seconds, 0-70) for RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5, at the same access rates and for the same three HTTP variants]
For larger RTTs, response time is dominated by TCP connection-establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary
  principles behind transport-layer services: multiplexing/demultiplexing, reliable data transfer, flow control, congestion control
  instantiation and implementation in the Internet: UDP, TCP
Next: leaving the network "edge" (application, transport layers) and heading into the network "core".

Page 23: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-23

Rdt20 channel with bit errors

underlying channel may flip bits in packet recall UDP checksum to detect bit errors

the question how to recover from errors acknowledgements (ACKs) receiver explicitly tells

sender that pkt received OK negative acknowledgements (NAKs) receiver

explicitly tells sender that pkt had errors sender retransmits pkt on receipt of NAK human scenarios using ACKs NAKs

new mechanisms in rdt20 (beyond rdt10) error detection receiver feedback control msgs (ACKNAK) rcvr-

gtsender

Ack I love u I love u 2Nak I love u I donrsquot love u

3-24

rdt20 FSM specification

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

belowsender

receiverrdt_send(data)

Buffer is needed to store data from application layer or to block call

3-25

rdt20 operation with no errors

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

3-26

rdt20 error scenario

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

GOT IT

3-27

rdt20 has a fatal flaw

What happens if ACKNAK corrupted

sender doesnrsquot know what happened at receiver

canrsquot just retransmit possible duplicate

What to do sender ACKsNAKs

receiverrsquos ACKNAK What if sender ACKNAK lost

retransmit but this might cause retransmission of correctly received pkt

Handling duplicates sender adds sequence

number to each pkt sender retransmits

current pkt if ACKNAK garbled

receiver discards (doesnrsquot deliver up) duplicate pkt

Sender sends one packet then waits for receiver response

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimation: RTT from gaia.cs.umass.edu to fantasia.eurecom.fr

(Plot on the slide: RTT in milliseconds, roughly 100 to 350 ms, versus time in seconds; the SampleRTT curve is noisy while the EstimatedRTT curve tracks it smoothly.)

3-58

TCP Round Trip Time and Timeout

Setting the timeout (Jacobson/Karels): EstimatedRTT plus a "safety margin"

large variation in EstimatedRTT -> larger safety margin

first estimate how much SampleRTT deviates from EstimatedRTT:

DevRTT = (1 - β) · DevRTT + β · |SampleRTT - EstimatedRTT|

(typically β = 0.25)

Then set the timeout interval:

TimeoutInterval = EstimatedRTT + 4 · DevRTT
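A minimal sketch of this bookkeeping in Python (the seed values and SampleRTT samples are illustrative; here DevRTT is updated before EstimatedRTT, which is one common implementation choice):

ALPHA = 0.125   # weight of the newest SampleRTT
BETA = 0.25     # weight of the newest deviation sample

def update_rtt(estimated_rtt, dev_rtt, sample_rtt):
    """Return updated (EstimatedRTT, DevRTT, TimeoutInterval) after one SampleRTT."""
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
    estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
    timeout_interval = estimated_rtt + 4 * dev_rtt
    return estimated_rtt, dev_rtt, timeout_interval

est, dev = 0.100, 0.0                         # seconds; illustrative starting point
for sample in (0.106, 0.120, 0.140, 0.090):   # hypothetical SampleRTT measurements
    est, dev, rto = update_rtt(est, dev, sample)
    print(f"SampleRTT={sample:.3f}s EstimatedRTT={est:.3f}s DevRTT={dev:.3f}s Timeout={rto:.3f}s")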

3-59

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP: segment structure; reliable data transfer; flow control; connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-60

TCP reliable data transfer

TCP creates an rdt service on top of IP's unreliable service

Pipelined segments (for performance)

Cumulative ACKs; TCP uses a single retransmission timer

Retransmissions are triggered by: timeout events; duplicate ACKs (for performance reasons)

Initially, consider a simplified TCP sender: ignore duplicate ACKs; ignore flow control and congestion control

3-61

TCP sender events:

data rcvd from app: create a segment with seq # (seq # is the byte-stream number of the first data byte in the segment); start the timer if it is not already running (think of the timer as being for the oldest unACKed segment); expiration interval: TimeOutInterval

timeout: retransmit the segment that caused the timeout; restart the timer

ACK rcvd: if it acknowledges previously unACKed segments, update what is known to be ACKed; start the timer if there are still outstanding segments

3-62

TCP sender (simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event)

  event: data received from application above
    create TCP segment with sequence number NextSeqNum
    if (timer currently not running)
      start timer
    pass segment to IP
    NextSeqNum = NextSeqNum + length(data)

  event: timer timeout
    retransmit not-yet-acknowledged segment with smallest sequence number
    start timer

  event: ACK received, with ACK field value of y
    if (y > SendBase) {
      SendBase = y
      if (there are currently not-yet-acknowledged segments)
        start timer
    }

} /* end of loop forever */

Comment: SendBase - 1 = last cumulatively ACKed byte.
Example: SendBase - 1 = 71, y = 73, so the receiver wants 73+; y > SendBase, so new data is ACKed.

3-63

TCP retransmission scenarios

Lost ACK scenario: Host A sends Seq=92, 8 bytes of data; Host B's ACK=100 is lost; A's timer for Seq=92 expires and A retransmits Seq=92, 8 bytes of data; B ACKs again with ACK=100 and SendBase moves to 100.

Premature timeout scenario: A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); the ACKs (ACK=100, ACK=120) are delayed and A's timer for Seq=92 expires, so A retransmits Seq=92; the ACKs then arrive (SendBase moves from 100 to 120) and B answers the duplicate with another ACK=120.

3-64

TCP retransmission scenarios (more)

Cumulative ACK scenario: Host A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); ACK=100 is lost, but ACK=120 arrives before the timeout; since ACK=120 cumulatively acknowledges everything up to byte 119, SendBase moves to 120 and nothing is retransmitted.

Room for improvement?

3-65

TCP ACK generation [RFC 1122, RFC 2581]

Event at receiver -> TCP receiver action:

Arrival of in-order segment with expected seq #; all data up to the expected seq # already ACKed -> Delayed ACK: wait up to 500 ms for the next segment; if no next segment, send ACK

Arrival of in-order segment with expected seq #; one other segment has an ACK pending -> Immediately send a single cumulative ACK, ACKing both in-order segments

Arrival of out-of-order segment with higher-than-expected seq #; gap detected -> Immediately send a duplicate ACK, indicating the seq # of the next expected byte

Arrival of a segment that partially or completely fills the gap -> Immediately send an ACK, provided the segment starts at the lower end of the gap

ACK the "largest in-order byte" seq #

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

The timeout period is often relatively long: a long delay before resending a lost packet

Detect lost segments via duplicate ACKs: the sender often sends many segments back-to-back, so if a segment is lost there will likely be many duplicate ACKs

If the sender receives 3 duplicate ACKs for the same data, it assumes that the segment after the ACKed data was lost: fast retransmit - resend the segment before its timer expires

3-67

Fast retransmit algorithm:

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments)
      start timer
  }
  else {  /* a duplicate ACK for an already-ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y   /* fast retransmit */
  }

Q: why resend the packet with seq # y?

A: because y is exactly the byte the receiver says it is still expecting.
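A toy sketch of the duplicate-ACK counting that triggers fast retransmit (class and variable names are made up; real TCP keeps this state per connection alongside the retransmission timer):

DUP_ACK_THRESHOLD = 3

class FastRetransmitSender:
    def __init__(self, send_base):
        self.send_base = send_base   # smallest unACKed byte (SendBase)
        self.dup_acks = 0

    def on_ack(self, y):
        if y > self.send_base:       # new data ACKed: advance window, reset counter
            self.send_base = y
            self.dup_acks = 0
        else:                        # duplicate ACK for already-ACKed data
            self.dup_acks += 1
            if self.dup_acks == DUP_ACK_THRESHOLD:
                print(f"fast retransmit: resend segment starting at byte {y}")

s = FastRetransmitSender(send_base=92)
for ack in (100, 100, 100, 100):     # ACK=100 arrives once as new data, then three duplicates
    s.on_ack(ack)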

3-68

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP: segment structure; reliable data transfer; flow control; connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-69

TCP Flow Control

The receive side of a TCP connection has a receive buffer; the app process may be slow at reading from that buffer

Flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast

Speed-matching service: matching the send rate to the receiving app's drain rate

3-70

TCP Flow control: how it works

(Suppose the TCP receiver discards out-of-order segments)

Spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]

The receiver advertises the spare room by including the current value of RcvWindow in the segments it sends

The sender limits unACKed data to RcvWindow: this guarantees the receive buffer doesn't overflow

This goes to show that careful design of the header fields matters.
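A small sketch of the receive-window arithmetic (the buffer size and byte counters are made-up numbers):

def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    # RcvWindow = RcvBuffer - (LastByteRcvd - LastByteRead): spare room advertised to the sender.
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

RCV_BUFFER = 64 * 1024                                 # 64 KB receive buffer (illustrative)
last_byte_rcvd, last_byte_read = 150_000, 120_000      # hypothetical byte counters
window = rcv_window(RCV_BUFFER, last_byte_rcvd, last_byte_read)
print(f"advertised RcvWindow = {window} bytes")        # sender keeps unACKed data <= this value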

3-71

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP: segment structure; reliable data transfer; flow control; connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender and receiver establish a "connection" before exchanging data segments

initialize TCP variables: seq #s, buffers, flow-control info (e.g., RcvWindow)

client: connection initiator: Socket clientSocket = new Socket(hostname, portNumber);

server: contacted by client: Socket connectionSocket = welcomeSocket.accept();

Three-way handshake:

Step 1: client host sends a TCP SYN segment to the server; specifies the initial seq #; no data

Step 2: server host receives the SYN, replies with a SYN-ACK segment; the server allocates buffers; specifies the server's initial seq #

Step 3: client receives the SYN-ACK, replies with an ACK segment, which may contain data

IMPORTANT: SYN FLOODING (the server commits buffers and state at step 2, which SYN-flood attacks exploit)

3-73

TCP three-way handshake

1. Connection request (client -> server): SYN=1, seq=client_isn

2. Connection granted (server -> client): SYN=1, seq=server_isn, ack=client_isn+1

3. ACK (client -> server): SYN=0, seq=client_isn+1, ack=server_isn+1
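A sketch of the field values carried by the three segments, with made-up initial sequence numbers:

client_isn, server_isn = 1000, 5000   # arbitrary ISNs for illustration

segments = [
    {"dir": "client->server", "SYN": 1, "seq": client_isn,     "ack": None},             # step 1
    {"dir": "server->client", "SYN": 1, "seq": server_isn,     "ack": client_isn + 1},   # step 2 (SYN-ACK)
    {"dir": "client->server", "SYN": 0, "seq": client_isn + 1, "ack": server_isn + 1},   # step 3 (may carry data)
]
for seg in segments:
    print(seg)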

3-74

TCP Connection Management (cont.)

Closing a connection: the client closes its socket: clientSocket.close();

Step 1: the client end system sends a TCP FIN control segment to the server

Step 2: the server receives the FIN and replies with an ACK; it then closes the connection and sends its own FIN

(Timeline on the slide: client FIN -> server ACK; server FIN -> client ACK; the client then sits in "timed wait" before the connection is finally closed.)

Q: why don't we combine the ACK and the FIN?

The sender (here the server) may still have some data in the pipeline.

3-75

TCP Connection Management (cont.)

Step 3: the client receives the FIN and replies with an ACK; it enters "timed wait", during which it will respond with an ACK to any received FINs

Step 4: the server receives the ACK; its side of the connection is closed

Note: with a small modification, this can handle simultaneous FINs

(Timeline on the slide: both sides pass through closing, the client through timed wait, and then both end up closed.)

3-76

TCP Connection Management (cont.)

(The slide shows the TCP client lifecycle and the TCP server lifecycle as state diagrams.)

3-77

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP: segment structure; reliable data transfer; flow control; connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-78

Principles of Congestion Control

Congestion, informally: "too many sources sending too much data too fast for the network to handle"

different from flow control!

manifestations: lost packets (buffer overflow at routers); long delays (queueing in router buffers)

another top-10 problem!

TCP provides one of the MANY WAYS to perform congestion control

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous); one router with unlimited shared output link buffers, capacity C; no retransmission

Host A sends λin (original data, from the application); Host B delivers λout (to the application)

costs: large delays when congested; the maximum achievable throughput is approached

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers; the sender retransmits lost packets

Host A offers λin (original data) and λ'in (original data plus retransmitted data); the retransmissions are due to the transport layer reacting to packets dropped at the router's finite shared output link buffers; Host B receives λout

3-81

Causes/costs of congestion: scenario 2 (cont.)

always: λin = λout (goodput)

"perfect" retransmission, only when loss: λ'in > λout

retransmission of delayed (not lost) packets makes λ'in larger (than in the perfect case) for the same λout

"costs" of congestion: more work (retransmissions) for a given "goodput"; unneeded retransmissions: the link carries multiple copies of a packet

3-82

Causes/costs of congestion: scenario 3

four senders; multihop paths; timeout/retransmit

Q: what happens as λin and λ'in increase?

(Each host injects λin of original data plus retransmissions, so λ'in enters the finite shared output link buffers along a multihop path.)

3-83

Causes/costs of congestion: scenario 3 (cont.)

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted

(Graph on the slide: λout collapses toward zero as λ'in keeps growing; the system collapses, e.g., students.)

3-84

Approaches towards congestion control

Two broad approaches:

End-end congestion control: no explicit feedback from the network; congestion inferred from end-system observed loss and delay; approach taken by TCP

Network-assisted congestion control: routers provide feedback to end systems; either a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM), or an explicit rate the sender should send at

3-85

Case study: ATM ABR congestion control

ABR: available bit rate: an "elastic service"; if the sender's path is "underloaded", the sender should use the available bandwidth; if the sender's path is congested, the sender is throttled to a minimum guaranteed rate

RM (resource management) cells: sent by the sender, interspersed with data cells; bits in the RM cell are set by switches ("network-assisted"): NI bit: no increase in rate (mild congestion); CI bit: congestion indication; RM cells are returned to the sender by the receiver, with the bits intact

3-86

Case study: ATM ABR congestion control (cont.)

two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell; the sender's send rate is thus the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP: segment structure; reliable data transfer; flow control; connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission: LastByteSent - LastByteAcked <= CongWin

Roughly: rate = CongWin / RTT bytes/sec

CongWin is a dynamic function of perceived network congestion

How does the sender perceive congestion? loss event = timeout or 3 duplicate ACKs; the TCP sender reduces its rate (CongWin) after a loss event

three mechanisms: AIMD; slow start; conservative behaviour after timeout events

Note: congestion control must also be efficient, i.e., make use of the available bandwidth

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

(Sawtooth on the slide: for a long-lived TCP connection the congestion window climbs linearly, e.g., 8 Kbytes -> 16 Kbytes -> 24 Kbytes, then is cut in half at each loss.)

Role of ACKs: 1. an indication of loss; 2. ACKs are self-clocking: if the delay is large, the rate at which the congestion window grows is reduced

Human analogy: HK government or CUHK

3-90

TCP Slow Start

When the connection begins, CongWin = 1 MSS

Example: MSS = 500 bytes & RTT = 200 msec -> initial rate = 20 kbps

The available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate

When the connection begins, increase the rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)

When the connection begins, increase the rate exponentially until the first loss event: double CongWin every RTT; done by incrementing CongWin for every ACK received

Summary: the initial rate is slow but ramps up exponentially fast

(Timeline on the slide: Host A sends one segment, then two, then four, one round per RTT.)

3-92

Refinement

After 3 dup ACKs: CongWin is cut in half and the window then grows linearly; this is known as the "fast recovery" phase, implemented in the newer TCP Reno

But after a timeout event: CongWin is instead set to 1 MSS; the window then grows exponentially up to a threshold, then grows linearly

Philosophy: 3 dup ACKs indicate the network is still capable of delivering some segments; a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: when should the exponential increase switch to linear?

A: when CongWin gets to 1/2 of its value before the timeout

Implementation: a variable Threshold; at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

(Chart on the slide: congestion window size in segments versus transmission round, comparing TCP Tahoe and TCP Reno; the dashed line marks the threshold.)

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially

When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss

3-95

TCP sender congestion control

Event / State / TCP sender action / Commentary:

ACK receipt for previously unACKed data / Slow Start (SS) / CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance" / Results in a doubling of CongWin every RTT

ACK receipt for previously unACKed data / Congestion Avoidance (CA) / CongWin = CongWin + MSS · (MSS / CongWin) / Additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK / SS or CA / Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance" / Fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Timeout / SS or CA / Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start" / Enter slow start

Duplicate ACK / SS or CA / Increment the duplicate ACK count for the segment being ACKed / CongWin and Threshold not changed
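A compact sketch of the event/action rules above (Python; the MSS value, initial threshold, and state names are invented for illustration, and per-ACK updates stand in for per-RTT growth):

MSS = 1460  # bytes (illustrative)

class TcpCongestionState:
    def __init__(self):
        self.cong_win = 1 * MSS
        self.threshold = 64 * 1024          # arbitrary large initial threshold
        self.state = "slow_start"

    def on_new_ack(self):
        if self.state == "slow_start":
            self.cong_win += MSS            # adds up to a doubling of CongWin per RTT
            if self.cong_win > self.threshold:
                self.state = "congestion_avoidance"
        else:                               # congestion avoidance: about +1 MSS per RTT
            self.cong_win += MSS * MSS / self.cong_win

    def on_triple_dup_ack(self):
        self.threshold = self.cong_win / 2
        self.cong_win = self.threshold      # fast recovery (Reno-style)
        self.state = "congestion_avoidance"

    def on_timeout(self):
        self.threshold = self.cong_win / 2
        self.cong_win = 1 * MSS
        self.state = "slow_start"

c = TcpCongestionState()
for _ in range(20):
    c.on_new_ack()
c.on_triple_dup_ack()
print(round(c.cong_win), round(c.threshold), c.state)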

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when a loss occurs

When the window is W, the throughput is W/RTT

Just after a loss, the window drops to W/2, and the throughput to W/(2·RTT)

Average throughput: 0.75 · W/RTT
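In LaTeX form, the averaging step behind that 0.75·W/RTT figure (assuming the window ramps linearly from W/2 back to W between losses):

\text{average window} \approx \frac{1}{2}\left(\frac{W}{2} + W\right) = \frac{3W}{4}
\quad\Longrightarrow\quad
\text{average throughput} \approx \frac{0.75\,W}{RTT}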

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput

Requires a window size of W = 83,333 in-flight segments

Throughput in terms of the loss rate L: throughput = 1.22 · MSS / (RTT · √L)

For 10 Gbps this needs L = 2·10⁻¹⁰. Wow! New versions of TCP for high-speed networks are needed!
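A quick numeric check of this slide's example (segment size, RTT and target rate taken from the example; 1.22 is the constant in the throughput formula above):

MSS_BITS = 1500 * 8
RTT = 0.100                     # seconds
TARGET = 10e9                   # target throughput, bits per second

W = TARGET * RTT / MSS_BITS     # in-flight segments needed to sustain the target rate
print(f"required window ~ {W:,.0f} segments")            # ~83,333

# Invert throughput = 1.22 * MSS / (RTT * sqrt(L)) to get the tolerable loss rate L.
L = (1.22 * MSS_BITS / (RTT * TARGET)) ** 2
print(f"required loss rate L ~ {L:.1e}")                  # ~2e-10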

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

(Figure on the slide: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R.)

3-99

Why is TCP fair?

Two competing homogeneous sessions (e.g., similar propagation delay, etc.):

additive increase gives a slope of 1 as the combined throughput increases; multiplicative decrease cuts each throughput proportionally

(Phase plot on the slide: connection 1 throughput versus connection 2 throughput, both bounded by R; repeated cycles of "congestion avoidance: additive increase" followed by "loss: decrease window by factor of 2" move the operating point toward the equal-bandwidth-share line.)
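A rough simulation of that argument (two flows with synchronized losses and arbitrary units; the parameters are invented):

R = 100.0                      # bottleneck capacity
x1, x2 = 70.0, 10.0            # unequal starting rates

for rnd in range(40):
    x1 += 1.0                  # additive increase: +1 unit per round for each flow
    x2 += 1.0
    if x1 + x2 > R:            # bottleneck overloaded: both flows see a loss
        x1 /= 2.0              # multiplicative decrease halves each rate,
        x2 /= 2.0              # so the gap between the two rates also halves
print(f"after 40 rounds: x1={x1:.1f}, x2={x2:.1f}")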

3-100

Fairness (more)

Fairness and UDP: multimedia apps often do not use TCP because they do not want their rate throttled by congestion control; instead they use UDP: pump audio/video at a constant rate, tolerate packet loss; research area: TCP-friendly congestion control

Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts, and Web browsers do this; example: for a link of rate R already supporting 9 connections, a new app asking for 1 TCP connection gets rate R/10, but a new app asking for 11 TCP connections gets around R/2

3-101

Delay modeling

Q: how long does it take to receive an object from a Web server after sending a request?

Delay is influenced by: TCP connection establishment; data transmission delay; slow start (the sender's congestion window)

Notation, assumptions: one link between client and server, of rate R; S: MSS (bits); O: object size (bits); no retransmissions (no loss, no corruption); protocol overhead is negligible

Window size: first assume a fixed congestion window of W segments; then dynamic window modeling: slow start

3-102

Fixed congestion window (1)

Let W denote a "fixed" congestion window size (a positive integer); assume no other congestion window constraint

First case: W·S/R > RTT + S/R: the ACK for the first segment in a window returns before a window's worth of data has been sent

delay = 2·RTT + O/R

(the object request is piggybacked on the third handshake segment)

This is the "lower bound" of latency.

3-103

Fixed (or static) congestion window (2)

Second case: W·S/R < RTT + S/R: the sender must wait for an ACK after sending each window's worth of data

delay = 2·RTT + O/R + (K - 1)·[S/R + RTT - W·S/R]

where K is the number of windows of data that cover the object; if O is the "size" of the object, K = O/(W·S), rounded up to an integer.
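A sketch of the two static-window cases (Python; the helper name and the example inputs are made up, and K is rounded up with a ceiling):

from math import ceil

def static_window_delay(O, S, R, RTT, W):
    """Latency of an O-bit object sent in segments of S bits over rate R with a fixed window W."""
    K = ceil(O / (W * S))                      # number of windows covering the object
    stall = S / R + RTT - W * S / R            # idle time per window, if positive
    if stall <= 0:                             # case 1: W*S/R >= RTT + S/R, the pipe never drains
        return 2 * RTT + O / R
    return 2 * RTT + O / R + (K - 1) * stall   # case 2: the sender stalls K-1 times

# Example (made-up numbers): 100 kB object, 536-byte segments, 1 Mbps link, 100 ms RTT, W = 4.
print(f"{static_window_delay(O=800_000, S=536*8, R=1_000_000, RTT=0.1, W=4):.2f} s")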

3-104

TCP Delay Modeling: Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the latency for one object is:

Latency = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where P is the number of times TCP idles at the server:

P = min{Q, K - 1}

- where Q is the number of times the server would idle if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling: Slow Start (2)

(Timeline on the slide: initiate the TCP connection, request the object, then windows of S/R, 2S/R, 4S/R, 8S/R, ..., with the server idling between some of them until the object is delivered.)

Example: O/S = 15 segments; K = 4 windows; Q = 2; P = min{K-1, Q} = 2, so the server idles P = 2 times

Delay components: 2·RTT for connection establishment and request; O/R to transmit the object; plus the time the server idles due to slow start

The server idles P = min{K-1, Q} times

3-106

TCP Delay Modeling (3)

Time to transmit the k-th window: 2^(k-1) · S/R

Time from when the server starts to send a segment until it receives its acknowledgement: S/R + RTT

Idle time after the k-th window (k = 1, 2, ..., K): max{ S/R + RTT - 2^(k-1)·S/R , 0 }

delay = 2·RTT + O/R + sum_{p=1..P} idleTime_p
      = 2·RTT + O/R + sum_{k=1..P} [ S/R + RTT - 2^(k-1)·S/R ]
      = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

(Same slow-start timeline as on the previous slide: connection setup, object request, then windows of S/R, 2S/R, 4S/R, 8S/R, ...)

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

K = min{ k : 2^0·S + 2^1·S + ... + 2^(k-1)·S >= O }
  = min{ k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S }
  = min{ k : 2^k - 1 >= O/S }
  = min{ k : k >= log2(O/S + 1) }
  = ⌈ log2(O/S + 1) ⌉

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
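Putting the model together as a sketch (function name and inputs are illustrative; it lands close to, but not always exactly on, the table values in the experiments that follow because of rounding there):

from math import ceil, log2

def slow_start_latency(O, S, R, RTT):
    """Latency of an O-bit object over rate R with S-bit segments, slow start, no loss."""
    K = ceil(log2(O / S + 1))             # number of windows that cover the object
    Q = ceil(log2(1 + RTT / (S / R)))     # idle periods if the object were infinite
    P = min(Q, K - 1)
    return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R

# Example in the spirit of Experiment A: S = 536 bytes, RTT = 100 ms, O = 100 kB, R = 1 Mbps.
print(f"{slow_start_latency(O=100_000 * 8, S=536 * 8, R=1_000_000, RTT=0.1):.2f} s")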

3-108

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

R / O/R / P / Min latency (O/R + 2·RTT) / Latency with slow start:

28 kbps / 28.6 sec / 1 / 28.8 sec / 28.9 sec

100 kbps / 8 sec / 2 / 8.2 sec / 8.4 sec

1 Mbps / 800 msec / 5 / 1 sec / 1.5 sec

10 Mbps / 80 msec / 7 / 0.28 sec / 0.98 sec

Observation: slow start adds appreciable delay only when R is high; if R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

R / O/R / P / Min latency (O/R + 2·RTT) / Latency with slow start:

28 kbps / 1.43 sec / 1 / 1.63 sec / 1.73 sec

100 kbps / 0.4 sec / 2 / 0.6 sec / 0.76 sec

1 Mbps / 40 msec / 3 / 0.24 sec / 0.52 sec

10 Mbps / 4 msec / 3 / 0.20 sec / 0.50 sec

Observation: slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

R / O/R / P / Min latency (O/R + 2·RTT) / Latency with slow start:

28 kbps / 1.43 sec / 3 / 3.4 sec / 5.8 sec

100 kbps / 0.4 sec / 3 / 2.4 sec / 5.2 sec

1 Mbps / 40 msec / 3 / 2.0 sec / 5.0 sec

10 Mbps / 4 msec / 3 / 2.0 sec / 5.0 sec

Observation: slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of: 1 base HTML page (of size O bits); M images (each also of size O bits)

Non-persistent HTTP: M+1 TCP connections in series; Response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times

Persistent HTTP: 2 RTT to request and receive the base HTML file; 1 RTT to request and receive the M images; Response time = (M+1)·O/R + 3·RTT + sum of idle times

Non-persistent HTTP with X parallel connections: suppose M/X is an integer; 1 TCP connection for the base file; M/X sets of parallel connections for the images; Response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times
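A sketch comparing the three formulas with the idle-time terms dropped, so the absolute numbers are optimistic; the scenario values mirror the charts on the next slides:

def non_persistent(M, O, R, RTT):
    return (M + 1) * O / R + (M + 1) * 2 * RTT

def persistent(M, O, R, RTT):
    return (M + 1) * O / R + 3 * RTT

def parallel_non_persistent(M, O, R, RTT, X):
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT

O, M, X, RTT = 5_000 * 8, 10, 5, 0.1          # 5 kB objects, 10 images, 5 parallel connections
for R in (28_000, 100_000, 1_000_000, 10_000_000):
    print(R,
          round(non_persistent(M, O, R, RTT), 2),
          round(persistent(M, O, R, RTT), 2),
          round(parallel_non_persistent(M, O, R, RTT, X), 2))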

3-112

HTTP response time (in seconds): RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5

(Bar chart on the slide: response times for non-persistent, persistent, and parallel non-persistent HTTP at 28 kbps, 100 kbps, 1 Mbps, and 10 Mbps.)

For a low-bandwidth connection, response time is dominated by transmission time

Persistent connections give only a minor improvement over parallel connections

3-113

HTTP response time (in seconds): RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5

(Bar chart on the slide: the same comparison at 28 kbps through 10 Mbps, now with a 1-second RTT.)

For larger RTT, response time is dominated by TCP establishment and slow-start delays; persistent connections now give an important improvement, particularly in high delay·bandwidth networks

3-114

Chapter 3: Summary

principles behind transport layer services: multiplexing, demultiplexing; reliable data transfer; flow control; congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application and transport layers) and heading into the network "core"

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP
• Multimedia apps often do not use TCP: they do not want their rate throttled by congestion control.
• Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss.
• Research area: TCP-friendly congestion control.

Fairness and parallel TCP connections
• Nothing prevents an app from opening parallel connections between 2 hosts.
• Web browsers do this. Example: a link of rate R supporting 9 existing connections:
  - a new app asking for 1 TCP connection gets rate R/10
  - a new app asking for 11 TCP connections gets around R/2 (see the arithmetic sketch below)
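A quick check of those two numbers (a sketch; it assumes the idealized model in which every TCP connection on the link gets an equal share):

R = 1.0                       # normalize link rate
existing = 9                  # connections already on the link

# New app opens 1 connection: 10 equal shares in total
print(R / (existing + 1))             # 0.1  -> R/10

# New app opens 11 connections: 20 equal shares, and the app holds 11 of them
print(11 * R / (existing + 11))       # 0.55 -> roughly R/2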

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by:
• TCP connection establishment
• data transmission delay
• slow start (sender's congestion window)

Notation, assumptions:
• assume one link between client and server, of rate R
• S: MSS (bits)
• O: object size (bits)
• no retransmissions (no loss, no corruption)
• protocol overhead is negligible

Window size:
• first assume a fixed congestion window of W segments
• then dynamic window modeling: slow start

3-102

Fixed congestion window (1): assume no congestion window constraint

Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R
The ACK for the first segment in the window returns before a window's worth of data has been sent.

delay = 2 RTT + O/R

(The object request is piggybacked; this is the "lower bound" of latency.)

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R
The sender must wait for an ACK after sending a window's worth of data.

delay = 2 RTT + O/R + (K - 1) [S/R + RTT - WS/R]

where K is the number of windows of data that cover the object. If O is the "size" of the object, K = O/(WS), rounded up to an integer.
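Both cases fold into one small helper. The sketch below (hypothetical function name; O and S in bits, R in bits/sec, RTT in seconds) returns the static-window latency; case 1 is simply the formula with the bracketed idle term clipped at zero.

import math

def static_window_delay(O, S, R, RTT, W):
    """Latency to fetch an O-bit object over a link of rate R with a
    fixed congestion window of W segments of S bits each (no slow start)."""
    K = math.ceil(O / (W * S))                  # number of windows covering the object
    idle_per_window = S / R + RTT - W * S / R   # stall waiting for the ACK of each window
    idle = max(0.0, idle_per_window)            # case 1 (WS/R > RTT + S/R): no stall
    return 2 * RTT + O / R + (K - 1) * idle

# Example with assumed values: 100 kbit object, 8 kbit segments, 1 Mbps link, 100 ms RTT
print(static_window_delay(O=100e3, S=8e3, R=1e6, RTT=0.1, W=4))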

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is:

   Latency = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where P is the number of times TCP idles at the server:

   P = min{Q, K - 1}

- where Q is the number of times the server would idle if the object were of infinite size
- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[Figure: client/server timing diagram - initiate TCP connection, request object, first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R, complete transmission, object delivered]

Example:
• O/S = 15 segments
• K = 4 windows
• Q = 2
• P = min{K-1, Q} = 2
• Server idles P = 2 times

Delay components:
• 2 RTT for connection establishment and request
• O/R to transmit the object
• time the server idles due to slow start

Server idles P = min{K-1, Q} times

3-106

TCP Delay Modeling (3)

Time to transmit the kth window:  (2^(k-1)) S/R

Idle time after the kth window:  [S/R + RTT - (2^(k-1)) S/R]^+

(S/R + RTT = time from when the server starts to send a segment until the server receives its acknowledgement)

delay = 2 RTT + O/R + sum over p = 1..P of idleTime_p
      = 2 RTT + O/R + sum over k = 1..P of [S/R + RTT - (2^(k-1)) S/R]
      = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where k = 1, 2, ..., K

[Figure: same client/server slow-start timing diagram as on the previous slide]

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

K = min{k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O}
  = min{k : (2^k - 1) S >= O}
  = min{k : 2^k >= O/S + 1}
  = min{k : k >= log2(O/S + 1)}
  = ceil( log2(O/S + 1) )

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
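Putting the last few slides together, the whole slow-start latency model fits in a few lines. The sketch below uses a hypothetical function name, takes O and S in bits, R in bits/sec, RTT in seconds, and plugs in an assumed closed form for Q (the "infinite object" idle count, which the slide leaves as homework), so treat it as an illustration rather than the book's exact calculation.

import math

def slow_start_latency(O, S, R, RTT):
    """Latency of fetching an O-bit object with TCP slow start (no loss)."""
    K = math.ceil(math.log2(O / S + 1))            # windows needed to cover the object
    # Q: largest k for which the server still idles on an infinite object,
    # i.e. S/R + RTT > (2^(k-1)) S/R  (assumed closed form, see HW)
    Q = math.floor(math.log2(1 + RTT * R / S)) + 1
    P = min(K - 1, Q)                              # times the server actually idles
    return 2 * RTT + O / R + P * (RTT + S / R) - (2**P - 1) * S / R

# Roughly Experiment A's 1 Mbps row (S = 536 bytes, RTT = 100 msec, O = 100 kbytes):
print(slow_start_latency(O=100e3 * 8, S=536 * 8, R=1e6, RTT=0.1))
# about 1.4 s, in the same ballpark as the table below (rounding and the exact
# byte/Kbyte conventions differ slightly)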

3-108

Experiment A

R         O/R        P    Min latency (O/R + 2 RTT)    Latency with slow start
28 kbps   28.6 sec   1    28.8 sec                     28.9 sec
100 kbps  8 sec      2    8.2 sec                      8.4 sec
1 Mbps    800 msec   5    1.0 sec                      1.5 sec
10 Mbps   80 msec    7    0.28 sec                     0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R         O/R        P    Min latency (O/R + 2 RTT)    Latency with slow start
28 kbps   1.43 sec   1    1.63 sec                     1.73 sec
100 kbps  0.4 sec    2    0.6 sec                      0.76 sec
1 Mbps    40 msec    3    0.24 sec                     0.52 sec
10 Mbps   4 msec     3    0.20 sec                     0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

R         O/R        P    Min latency (O/R + 2 RTT)    Latency with slow start
28 kbps   1.43 sec   3    3.4 sec                      5.8 sec
100 kbps  0.4 sec    3    2.4 sec                      5.2 sec
1 Mbps    40 msec    3    2.0 sec                      5.0 sec
10 Mbps   4 msec     3    2.0 sec                      5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
• 1 base HTML page (of size O bits)
• M images (each of size O bits)

Non-persistent HTTP:
• M+1 TCP connections in series
• Response time = (M+1) O/R + (M+1) 2 RTT + sum of idle times

Persistent HTTP:
• 2 RTT to request and receive the base HTML file
• 1 RTT to request and receive the M images
• Response time = (M+1) O/R + 3 RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
• Suppose M/X is an integer
• 1 TCP connection for the base file, then M/X sets of X parallel connections for the images
• Response time = (M+1) O/R + (M/X + 1) 2 RTT + sum of idle times

(A numeric sketch of these three formulas follows below.)
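To make the comparison in the next two charts concrete, here is a minimal sketch that evaluates the three formulas above with the slow-start idle-time term dropped (so it understates the non-persistent cases, especially at large RTT). The function name and the parameter choices, which mirror the chart (O = 5 Kbytes, M = 10, X = 5), are assumptions for illustration.

def response_times(O, R, RTT, M, X):
    """The three response-time formulas above, ignoring the slow-start idle times."""
    non_persistent = (M + 1) * O / R + (M + 1) * 2 * RTT
    persistent     = (M + 1) * O / R + 3 * RTT
    parallel       = (M + 1) * O / R + (M / X + 1) * 2 * RTT
    return non_persistent, persistent, parallel

O = 5 * 8e3        # 5 Kbytes in bits
M, X = 10, 5
for R in (28e3, 100e3, 1e6, 10e6):
    print(R, [round(t, 2) for t in response_times(O, R, RTT=0.1, M=M, X=X)])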

3-112

[Figure: bar chart of HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5]

For a low-bandwidth link, connection setup and response time are dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

[Figure: bar chart of HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5]

For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary

Principles behind transport layer services:
• multiplexing, demultiplexing
• reliable data transfer
• flow control
• congestion control

Instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application, transport layers) and heading into the network "core"

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 25: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-25

rdt20 operation with no errors

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

3-26

rdt20 error scenario

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

GOT IT

3-27

rdt20 has a fatal flaw

What happens if ACKNAK corrupted

sender doesnrsquot know what happened at receiver

canrsquot just retransmit possible duplicate

What to do sender ACKsNAKs

receiverrsquos ACKNAK What if sender ACKNAK lost

retransmit but this might cause retransmission of correctly received pkt

Handling duplicates sender adds sequence

number to each pkt sender retransmits

current pkt if ACKNAK garbled

receiver discards (doesnrsquot deliver up) duplicate pkt

Sender sends one packet then waits for receiver response

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 26: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-26

rdt20 error scenario

Wait for call from above

snkpkt = make_pkt(data checksum)udt_send(sndpkt)

extract(rcvpktdata)deliver_data(data)udt_send(ACK)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

rdt_rcv(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp isNAK(rcvpkt)

udt_send(NAK)

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Wait for ACK or

NAK

Wait for call from

below

rdt_send(data)

GOT IT

3-27

rdt20 has a fatal flaw

What happens if ACKNAK corrupted

sender doesnrsquot know what happened at receiver

canrsquot just retransmit possible duplicate

What to do sender ACKsNAKs

receiverrsquos ACKNAK What if sender ACKNAK lost

retransmit but this might cause retransmission of correctly received pkt

Handling duplicates sender adds sequence

number to each pkt sender retransmits

current pkt if ACKNAK garbled

receiver discards (doesnrsquot deliver up) duplicate pkt

Sender sends one packet then waits for receiver response

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN: sender extended FSM

Single state: Wait, with base = 1, nextseqnum = 1 initially.

rdt_send(data):
  if (nextseqnum < base + N) {
      sndpkt[nextseqnum] = make_pkt(nextseqnum, data, chksum)
      udt_send(sndpkt[nextseqnum])
      if (base == nextseqnum) start_timer      /* no pkt in pipe */
      nextseqnum++
  } else refuse_data(data)                     /* buffer data, or block the higher app */

timeout:
  start_timer                                  /* reset timer */
  udt_send(sndpkt[base]); udt_send(sndpkt[base+1]); ...; udt_send(sndpkt[nextseqnum-1])

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
  base = getacknum(rcvpkt) + 1
  if (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) && corrupt(rcvpkt):
  do nothing

3-43

GBN: receiver extended FSM

ACK-only: always send ACK for the correctly-received pkt with the highest in-order seq #
  may generate duplicate ACKs
  need only remember expectedseqnum
out-of-order pkt: discard (don't buffer) -> no receiver buffering!
  re-ACK pkt with the highest in-order seq #

Single state: Wait, with expectedseqnum = 1 and sndpkt = make_pkt(expectedseqnum, ACK, chksum) initially.

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum):
  extract(rcvpkt, data); deliver_data(data)
  sndpkt = make_pkt(expectedseqnum, ACK, chksum); udt_send(sndpkt)
  expectedseqnum++

default:
  udt_send(sndpkt)

If the in-order pkt is received, deliver it to the app and ACK; else just drop it.
We don't even need a timer at the receiver - simply rely on retransmission by the sender.
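The two FSMs above can be condensed into a few lines of window bookkeeping; the sketch below assumes udt_send / deliver_data callables and leaves timer management to the caller.

    # Compact Go-Back-N sketch following the sender and receiver FSMs above.
    class GBNSender:
        def __init__(self, N):
            self.N = N                    # window size
            self.base = 1                 # oldest unACKed seq #
            self.nextseqnum = 1
            self.sndpkt = {}              # seq # -> packet, kept for retransmission

        def rdt_send(self, data, udt_send):
            if self.nextseqnum < self.base + self.N:
                self.sndpkt[self.nextseqnum] = (self.nextseqnum, data)
                udt_send(self.sndpkt[self.nextseqnum])
                self.nextseqnum += 1
                return True
            return False                  # window full: refuse data

        def on_ack(self, acknum):         # cumulative ACK slides the window
            self.base = acknum + 1

        def on_timeout(self, udt_send):   # go back N: resend every unACKed packet
            for seq in range(self.base, self.nextseqnum):
                udt_send(self.sndpkt[seq])

    class GBNReceiver:
        def __init__(self):
            self.expectedseqnum = 1

        def on_pkt(self, seq, data, deliver_data, udt_send):
            if seq == self.expectedseqnum:          # in order: deliver and advance
                deliver_data(data)
                self.expectedseqnum += 1
            # in order or not, (re-)ACK the highest in-order seq # received so far
            udt_send(("ACK", self.expectedseqnum - 1))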

3-44

GBN in action

Window size N = 4.

What determines the size of the window?
  1. RTT
  2. Buffer at the receiver (flow control)
  3. Network congestion

Q: GBN has poor performance. How?
Sender sends pkts 1, 2, 3, 4, 5, 6, 7, 8, 9, ...; pkt 1 gets lost; the receiver gets pkts 2, 3, 4, 5, ... but will discard them.

3-45

Selective Repeat (improvement of the GBN protocol)

receiver individually acknowledges all correctly received pkts
  buffers pkts as needed, for eventual in-order delivery to the upper layer
  e.g. sender sends pkts 1, 2, 3, 4, ..., 10; receiver got 2, 4, 6, 8, 10; sender resends 1, 3, 5, 7, 9

sender only resends pkts for which an ACK was not received
  sender keeps a timer for EACH unACKed pkt

sender window: N consecutive seq #'s; again limits the seq #'s of sent, unACKed pkts

3-46

Selective repeat: sender, receiver windows

Q: why do we have this?
The ACK was lost, or the ACK is still on its way.

3-47

Selective repeat

sender:
  data from above: if the next available seq # is in the window, send pkt
  timeout(n): resend pkt n, restart its timer
  ACK(n) in [sendbase, sendbase+N]:
    mark pkt n as received
    if n is the smallest unACKed pkt, advance the window base to the next unACKed seq # (slide the window)

receiver:
  pkt n in [rcvbase, rcvbase+N-1]:
    send ACK(n)
    out-of-order: buffer
    in-order: deliver (also deliver buffered, in-order pkts), advance the window to the next not-yet-received pkt
  pkt n in [rcvbase-N, rcvbase-1]:
    ACK(n)
    Q: why do we need this? The ACK got lost; the sender may time out and resend the pkt, so we need to ACK it again.
  otherwise: ignore
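The receiver-side rules above reduce to a short buffering routine; this sketch assumes packets arrive as (seq, data) and that ACKs are sent via a callable.

    # Selective-repeat receiver sketch: buffer out-of-order pkts inside the window,
    # deliver in order, and re-ACK pkts below the window (their ACK may have been lost).
    class SRReceiver:
        def __init__(self, N):
            self.N = N
            self.rcvbase = 0
            self.buffer = {}                       # seq # -> data, for out-of-order pkts

        def on_pkt(self, seq, data, deliver_data, send_ack):
            if self.rcvbase <= seq < self.rcvbase + self.N:
                send_ack(seq)                      # ACK individually
                self.buffer.setdefault(seq, data)
                while self.rcvbase in self.buffer: # deliver any in-order run, slide window
                    deliver_data(self.buffer.pop(self.rcvbase))
                    self.rcvbase += 1
            elif self.rcvbase - self.N <= seq < self.rcvbase:
                send_ack(seq)                      # already delivered: re-ACK anyway
            # otherwise: ignore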

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat: dilemma

Example: seq #'s 0, 1, 2, 3; window size N = 3

the receiver sees no difference in the two scenarios!
it incorrectly passes duplicate data as new in (a)

Q: what is the relationship between seq # space size and window size?

    N <= 2^k / 2    (window at most half the sequence-number space)

In real life we use k bits to implement the seq # - a practical issue.
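A tiny check of that rule for k = 2 (seq #'s 0..3): after the receiver ACKs a full window starting at 0, its new window is [N, ..., 2N-1] mod 2^k, and a retransmitted seq 0 must not fall inside it.

    # Why N <= 2^k / 2: a retransmitted old seq # must not look acceptable to the
    # receiver's new window, otherwise old copies are confused with new data.
    def window_is_safe(k, N):
        space = 2 ** k
        new_window = {(N + i) % space for i in range(N)}
        return 0 not in new_window        # seq 0 reappearing must NOT be accepted as new

    for N in (2, 3):
        print(f"k=2, N={N}: safe = {window_is_safe(2, N)}")
    # k=2, N=2: safe = True
    # k=2, N=3: safe = False   -> the dilemma shown on the slide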

3-50

Why bother to study reliable data transfer?

We know it is provided by TCP, so why bother to study it?

Sometimes we may need to implement "some form" of reliable transfer without heavy-duty TCP.

A good example is multimedia streaming. Even though the application is loss tolerant, if too many packets get lost it affects the visual quality, so we may want to implement some form of reliable transfer.

At the very least, appreciate the "good services" provided by some Internet gurus.

3-51

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP
  segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-52

TCP: Overview    RFCs: 793, 1122, 1323, 2018, 2581

point-to-point: one sender, one receiver (not multicast)
reliable, in-order byte stream: no "message boundaries" (in the app layer we need delimiters)
pipelined: TCP congestion and flow control set the window size
send & receive buffers on each side (socket door -> TCP send buffer -> segments -> TCP receive buffer -> socket door; the application writes data on one side and reads data on the other)
full duplex data: bi-directional data flow in the same connection; MSS: maximum segment size
connection-oriented: handshaking (exchange of control msgs) inits sender & receiver state (e.g. buffer size) before data exchange
flow controlled: the sender will not overwhelm the receiver

(The 800-lb gorilla in the transport stack. PAY ATTENTION!)

3-53

TCP segment structure (32 bits wide)

  source port #  |  dest port #
  sequence number                       (counting by "bytes" of data, not segments)
  acknowledgement number
  head len | not used | U A P R S F | receive window   (# bytes rcvr willing to accept)
  checksum (Internet checksum, as in UDP)  |  urg data pointer
  options (variable length)             <- due to this field we have a variable-length header
  application data (variable length)

Flag bits:
  URG: urgent data (generally not used)
  ACK: ACK # valid
  PSH: push data now (generally not used)
  RST, SYN, FIN: connection estab. (setup, teardown commands)
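As a sketch, the fixed 20-byte part of the header laid out above can be unpacked with Python's struct module; the field layout follows RFC 793, and any raw bytes you feed it are your own capture, not data from the slide.

    # Parse the fixed 20-byte TCP header described above.
    import struct

    def parse_tcp_header(raw: bytes):
        (src_port, dst_port, seq, ack,
         offset_flags, rcv_window, checksum, urg_ptr) = struct.unpack("!HHIIHHHH", raw[:20])
        header_len = (offset_flags >> 12) * 4        # "head len" is in 32-bit words
        flags = offset_flags & 0x3F                  # U A P R S F in the low bits
        return {
            "src_port": src_port, "dst_port": dst_port,
            "seq": seq, "ack": ack,
            "header_len": header_len,
            "SYN": bool(flags & 0x02), "ACK": bool(flags & 0x10), "FIN": bool(flags & 0x01),
            "rcv_window": rcv_window, "checksum": checksum,
            "options": raw[20:header_len],           # variable-length options, if any
        }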

3-54

TCP seq. #'s and ACKs

Seq. #'s: byte-stream "number" of the first byte in the segment's data
ACKs: seq # of the next byte expected from the other side; cumulative ACK
Q: how does the receiver handle out-of-order segments?
A: the TCP spec doesn't say - it's up to the implementor.

Simple telnet scenario (Host A, Host B):
  User types 'C':                        A -> B  Seq=42, ACK=79, data = 'C'
  host ACKs receipt of 'C',
  echoes back 'C':                       B -> A  Seq=79, ACK=43, data = 'C'
  host ACKs receipt of the echoed 'C':   A -> B  Seq=43, ACK=80

The initial seq #'s are negotiated during the 3-way handshake.

3-55

TCP Round Trip Time and Timeout

Q: how to set the TCP timeout value?
  longer than RTT, but RTT varies
  too short: premature timeout, unnecessary retransmissions
  too long: slow reaction to segment loss, poor performance

Q: how to estimate RTT?
  SampleRTT: measured time from segment transmission until ACK receipt; ignore retransmissions
  SampleRTT will vary; we want an estimated RTT that is "smoother": average several recent measurements, not just the current SampleRTT

(Timeline sketches of tx / retx / ack: if the estimated RTT is too long we react slowly; if too short we retransmit unnecessarily.)

3-56

TCP Round Trip Time and Timeout

    EstimatedRTT = (1 - α) * EstimatedRTT + α * SampleRTT

Exponential weighted moving average: the influence of a past sample decreases exponentially fast; typical value: α = 0.125.

Unrolling (with ERTT(0) = 0):
    ERTT(1) = (1-α)*ERTT(0) + α*SRTT(1) = α*SRTT(1)
    ERTT(2) = (1-α)*α*SRTT(1) + α*SRTT(2)
    ERTT(3) = (1-α)*(1-α)*α*SRTT(1) + (1-α)*α*SRTT(2) + α*SRTT(3)

3-57

Example RTT estimation

(Figure: RTT in milliseconds (100-350 ms) vs. time in seconds, for gaia.cs.umass.edu to fantasia.eurecom.fr; the SampleRTT curve fluctuates while the EstimatedRTT curve is noticeably smoother.)

3-58

TCP Round Trip Time and Timeout

Setting the timeout (by Jacobson/Karels): EstimatedRTT plus a "safety margin"
  large variation in EstimatedRTT -> larger safety margin
First estimate how much SampleRTT deviates from EstimatedRTT:

    DevRTT = (1 - β) * DevRTT + β * |SampleRTT - EstimatedRTT|      (typically β = 0.25)

Then set the timeout interval:

    TimeoutInterval = EstimatedRTT + 4 * DevRTT
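The two EWMAs above fit in a small estimator class; α = 0.125 and β = 0.25 as on the slides, and the first-sample bootstrap is a common convention rather than something the slide specifies.

    # RTT estimator sketch using the formulas above (alpha = 0.125, beta = 0.25).
    class RTTEstimator:
        def __init__(self, alpha=0.125, beta=0.25):
            self.alpha, self.beta = alpha, beta
            self.estimated_rtt = None
            self.dev_rtt = 0.0

        def update(self, sample_rtt):
            if self.estimated_rtt is None:          # first sample bootstraps the estimate
                self.estimated_rtt = sample_rtt
            else:
                self.dev_rtt = ((1 - self.beta) * self.dev_rtt
                                + self.beta * abs(sample_rtt - self.estimated_rtt))
                self.estimated_rtt = ((1 - self.alpha) * self.estimated_rtt
                                      + self.alpha * sample_rtt)
            return self.timeout_interval()

        def timeout_interval(self):
            return self.estimated_rtt + 4 * self.dev_rtt

    est = RTTEstimator()
    for s in (0.100, 0.120, 0.150, 0.110):          # SampleRTTs in seconds
        print(round(est.update(s), 4))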

3-59

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP
  segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-60

TCP reliable data transfer

TCP creates an rdt service on top of IP's unreliable service

Pipelined segments (for performance)

Cumulative ACKs

TCP uses a single retransmission timer

Retransmissions are triggered by: timeout events, duplicate ACKs (for performance reasons)

Initially, consider a simplified TCP sender:
  ignore duplicate ACKs
  ignore flow control, congestion control

3-61

TCP sender events

data rcvd from app:
  create segment with seq #
  seq # is the byte-stream number of the first data byte in the segment
  start the timer if not already running (think of the timer as being for the oldest unacked segment)
  expiration interval: TimeOutInterval

timeout:
  retransmit the segment that caused the timeout
  restart the timer

ACK rcvd:
  if it acknowledges previously unacked segments:
    update what is known to be acked
    start the timer if there are outstanding segments

3-62

TCP sender (simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event) {

    event: data received from application above
      create TCP segment with sequence number NextSeqNum
      if (timer currently not running) start timer
      pass segment to IP
      NextSeqNum = NextSeqNum + length(data)

    event: timer timeout
      retransmit not-yet-acknowledged segment with smallest sequence number
      start timer

    event: ACK received, with ACK field value of y
      if (y > SendBase) {
        SendBase = y
        if (there are currently not-yet-acknowledged segments) start timer
      }
  }
} /* end of loop forever */

Comment: SendBase-1 is the last cumulatively ACKed byte.
Example: SendBase-1 = 71, y = 73, so the rcvr wants 73+; y > SendBase, so new data is acked.

3-63

TCP retransmission scenarios

Lost ACK scenario (Host A -> Host B):
  A sends "Seq=92, 8 bytes data"; B's "ACK=100" is lost (X)
  A's timer expires; A resends "Seq=92, 8 bytes data"; B replies ACK=100
  (SendBase becomes 100 once the ACK finally arrives)

Premature timeout scenario:
  A sends "Seq=92, 8 bytes data" and then "Seq=100, 20 bytes data"
  B sends ACK=100 and ACK=120, but A's timer for Seq=92 expires first
  A resends "Seq=92, 8 bytes data"; B re-ACKs with ACK=120
  (SendBase moves 100 -> 120 as the ACKs arrive)

3-64

TCP retransmission scenarios (more)

Cumulative ACK scenario (Host A -> Host B):
  A sends "Seq=92, 8 bytes data" and then "Seq=100, 20 bytes data"
  ACK=100 is lost (X), but ACK=120 arrives before A's timeout
  SendBase = 120: the cumulative ACK covers the first segment, so no retransmission is needed

Room for improvement?

3-65

TCP ACK generation [RFC 1122, RFC 2581]

Event at receiver -> TCP receiver action:

Arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed
  -> Delayed ACK: wait up to 500 ms for the next segment; if no next segment, send ACK.

Arrival of in-order segment with expected seq #; one other segment has an ACK pending
  -> Immediately send a single cumulative ACK, ACKing both in-order segments.

Arrival of out-of-order segment with higher-than-expected seq #; gap detected
  -> Immediately send a duplicate ACK, indicating the seq # of the next expected byte.

Arrival of segment that partially or completely fills a gap
  -> Immediately send ACK, provided that the segment starts at the lower end of the gap.

ACK the "largest in-order byte" seq #. Delayed ACKs REDUCE FEEDBACK TRAFFIC BY HALF.
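A sketch of the first three rules as a decision function; the gap-filling rule is omitted because it needs an out-of-order buffer, and the simplified state (expected seq #, pending-ACK flag) is an assumption for illustration.

    # Receiver-side ACK decision sketch for the table above (rules 1-3 only).
    def on_segment(state, seg_seq, seg_len):
        """Return 'delay', 'cumulative_ack', or 'dup_ack' and update state."""
        if seg_seq == state["expected"]:                 # in-order segment
            state["expected"] += seg_len
            if state["ack_pending"]:                     # another in-order segment pending
                state["ack_pending"] = False
                return "cumulative_ack"                  # one ACK covers both
            state["ack_pending"] = True
            return "delay"                               # delayed ACK (wait <= 500 ms)
        return "dup_ack"                                 # out-of-order: duplicate ACK now

    state = {"expected": 0, "ack_pending": False}
    print(on_segment(state, 0, 100))    # delay
    print(on_segment(state, 100, 100))  # cumulative_ack
    print(on_segment(state, 500, 100))  # dup_ack (gap detected)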

3-66

Fast Retransmit

The timeout period is often relatively long: a long delay before resending a lost packet.

Detect lost segments via duplicate ACKs:
  the sender often sends many segments back-to-back
  if a segment is lost, there will likely be many duplicate ACKs

If the sender receives 3 duplicate ACKs for the same data, it supposes that the segment after the ACKed data was lost:
  fast retransmit: resend the segment before the timer expires

3-67

Fast retransmit algorithm

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments) start timer
  }
  else {                                     /* a duplicate ACK for an already ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y  /* fast retransmit */
  }

Q: why resend the pkt with seq # y?
A: that is what the receiver expects.

3-68

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP
  segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-69

TCP Flow Control

The receive side of a TCP connection has a receive buffer; the app process may be slow at reading from that buffer.

Flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast.

Speed-matching service: matching the send rate to the receiving app's drain rate.

3-70

TCP Flow control: how it works

(Suppose the TCP receiver discards out-of-order segments.)

Spare room in the buffer:

    RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]

The rcvr advertises the spare room by including the value of RcvWindow in segments.
The sender limits unACKed data to RcvWindow: this guarantees the receive buffer doesn't overflow.

This goes to show that the design process of the header is important.
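The bookkeeping above in a few lines, using the slide's variable names:

    # Flow-control bookkeeping sketch.
    def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
        return rcv_buffer - (last_byte_rcvd - last_byte_read)    # spare room to advertise

    def sender_can_send(last_byte_sent, last_byte_acked, advertised_window, nbytes):
        in_flight = last_byte_sent - last_byte_acked
        return in_flight + nbytes <= advertised_window           # keep unACKed data <= RcvWindow

    print(rcv_window(64_000, 50_000, 30_000))                 # 44000 bytes of spare room
    print(sender_can_send(50_000, 30_000, 44_000, 10_000))    # True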

3-71

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP
  segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender & receiver establish a "connection" before exchanging data segments
  initialize TCP variables: seq #'s, buffers, flow control info (e.g. RcvWindow)
  client: connection initiator
    Socket clientSocket = new Socket("hostname", port#);
  server: contacted by the client
    Socket connectionSocket = welcomeSocket.accept();

Three-way handshake:

Step 1: client host sends a TCP SYN segment to the server
  specifies the initial seq #; no data
Step 2: server host receives the SYN, replies with a SYN-ACK segment
  server allocates buffers
  specifies the server's initial seq #
Step 3: client receives the SYN-ACK, replies with an ACK segment, which may contain data

IMPORTANT: SYN FLOODING

3-73

TCP three-way handshake

client -> server:  SYN=1, seq=client_isn                       (connection request)
server -> client:  SYN=1, seq=server_isn, Ack=client_isn+1      (connection granted)
client -> server:  SYN=0, seq=client_isn+1, Ack=server_isn+1    (ACK)
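For orientation only: the handshake itself is carried out by the OS when an application calls connect()/accept(). The loopback address and port 9090 below are arbitrary choices for this sketch.

    # The three-way handshake happens inside the kernel during connect()/accept().
    import socket, threading, time

    def server():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", 9090))
        srv.listen(1)                         # like welcomeSocket on the slide
        conn, _ = srv.accept()                # returns once the handshake completes
        conn.close()
        srv.close()

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)                           # let the server reach listen()

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", 9090))          # kernel sends SYN, gets SYN-ACK, sends ACK
    cli.close()                               # starts the FIN/ACK teardown described next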

3-74

TCP Connection Management (cont.)

Closing a connection:

client closes the socket: clientSocket.close();

Step 1: the client end system sends a TCP FIN control segment to the server.
Step 2: the server receives the FIN, replies with ACK, closes the connection and sends its own FIN.

(Timeline: client sends FIN ("close"); server ACKs and sends FIN ("close"); client ACKs, enters a "timed wait", then "closed".)

Q: why don't we combine the ACK and the FIN?
The sender may still have some data in the pipeline.

3-75

TCP Connection Management (cont.)

Step 3: the client receives the FIN, replies with ACK.
  It enters a "timed wait" - it will respond with ACK to any received FINs.
Step 4: the server receives the ACK. Connection closed.

Note: with a small modification, this can handle simultaneous FINs.

(Timeline: client FIN -> server ACK; server FIN -> client ACK; both sides go through "closing" to "closed", with the client's close delayed by the timed wait.)

3-76

TCP Connection Management (cont.)

(Figures: TCP client lifecycle and TCP server lifecycle state diagrams.)

3-77

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP
  segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-78

Principles of Congestion Control

Congestion, informally: "too many sources sending too much data too fast for the network to handle"

Different from flow control!

Manifestations:
  lost packets (buffer overflow at routers)
  long delays (queueing in router buffers)

Another top-10 problem!

TCP provides one of the MANY WAYS to perform CC.

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous)
one router with infinite buffers, link capacity C
no retransmission

Host A: λ_in (original data, from application) -> unlimited shared output link buffers -> Host B: λ_out (to application)

large delays when congested
maximum achievable throughput

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers
sender retransmission of lost packets (pkts get dropped; the retransmission is due to the transport layer)

Host A: λ_in (original data), λ'_in (original data plus retransmitted data) -> finite shared output link buffers -> Host B: λ_out

3-81

Causes/costs of congestion: scenario 2

always: λ_in = λ_out (goodput)
"perfect" retransmission, only when loss: λ'_in > λ_out
retransmission of a delayed (not lost) packet makes λ'_in larger (than the perfect case) for the same λ_out

"Costs" of congestion:
  more work (retransmissions) for a given "goodput"
  unneeded retransmissions: the link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3

four senders, multihop paths, timeout/retransmit

Q: what happens as λ_in and λ'_in increase?

Host A: λ_in (original data), λ'_in (original data plus retransmitted data) -> finite shared output link buffers -> Host B: λ_out

3-83

Causes/costs of congestion: scenario 3

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

(Figure: λ_out collapses as the offered load grows - the system collapses, e.g. like overloaded students.)

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
  no explicit feedback from the network
  congestion inferred from end-system observed loss, delay
  approach taken by TCP

Network-assisted congestion control:
  routers provide feedback to end systems
    single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
    explicit rate the sender should send at

3-85

Case study: ATM ABR congestion control

ABR: available bit rate
  "elastic service"
  if the sender's path is "underloaded": the sender should use the available bandwidth
  if the sender's path is congested: the sender is throttled to a minimum guaranteed rate

RM (resource management) cells:
  sent by the sender, interspersed with data cells
  bits in the RM cell are set by switches ("network-assisted")
    NI bit: no increase in rate (mild congestion)
    CI bit: congestion indication
  RM cells are returned to the sender by the receiver, with the bits intact

3-86

Case study: ATM ABR congestion control

two-byte ER (explicit rate) field in the RM cell
  a congested switch may lower the ER value in the cell
  the sender's send rate is thus the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch
  if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP
  segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)
the sender limits transmission:  LastByteSent - LastByteAcked <= CongWin

Roughly,

    rate = CongWin / RTT   bytes/sec

CongWin is a dynamic function of perceived network congestion.

How does the sender perceive congestion?
  loss event = timeout or 3 duplicate ACKs
  the TCP sender reduces its rate (CongWin) after a loss event

Three mechanisms: AIMD, slow start, conservative behaviour after timeout events.

Note: CC must be efficient, to make use of the available BW.

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

(Figure: a long-lived TCP connection's congestion window traces a saw-tooth between roughly 8, 16 and 24 Kbytes over time.)

Role of ACKs:
  1. an indication of loss
  2. ACKs are self-clocking: if the delay is large, the rate of congestion-window growth is reduced

Human analogy: HK government, or CUHK.

3-90

TCP Slow Start

When a connection begins, CongWin = 1 MSS
  Example: MSS = 500 bytes & RTT = 200 msec
  initial rate = 20 kbps

The available bandwidth may be >> MSS/RTT
  desirable to quickly ramp up to a respectable rate

When the connection begins, increase the rate exponentially fast until the first loss event.

3-91

TCP Slow Start (more)

When the connection begins, increase the rate exponentially until the first loss event:
  double CongWin every RTT
  done by incrementing CongWin for every ACK received

Summary: the initial rate is slow, but it ramps up exponentially fast.

(Timeline, Host A -> Host B: one segment in the first RTT, two segments in the next RTT, then four segments, ...)

3-92

Refinement

After 3 dup ACKs:
  CongWin is cut in half
  the window then grows linearly
  this is known as the "fast recovery" phase, implemented in the newer TCP Reno

But after a timeout event:
  CongWin is instead set to 1 MSS
  the window then grows exponentially up to a threshold, then grows linearly

Philosophy:
  3 dup ACKs indicate the network is capable of delivering some segments
  a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.

Implementation:
  a variable Threshold
  at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

(Figure: congestion window size (segments) vs. transmission round, showing the threshold line and the different reactions of TCP Tahoe and TCP Reno after a loss.)

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.

When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss.
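The four rules above, sketched at per-RTT granularity (each on_new_ack call stands for one RTT's worth of ACKs); the initial threshold of 8 MSS is an arbitrary choice for the demo.

    # Sketch of the CongWin/Threshold update rules summarized above (units: MSS).
    class TCPCongestionControl:
        def __init__(self, threshold=8):
            self.congwin = 1
            self.threshold = threshold

        def on_new_ack(self):                   # one RTT of new ACKs
            if self.congwin < self.threshold:
                self.congwin *= 2               # slow start: double every RTT
            else:
                self.congwin += 1               # congestion avoidance: +1 MSS per RTT

        def on_triple_dup_ack(self):            # fast retransmit / fast recovery (Reno)
            self.threshold = max(self.congwin // 2, 1)
            self.congwin = self.threshold

        def on_timeout(self):                   # more alarming: back to slow start
            self.threshold = max(self.congwin // 2, 1)
            self.congwin = 1

    cc = TCPCongestionControl()
    for _ in range(6):
        cc.on_new_ack()
    print(cc.congwin, cc.threshold)             # 11 8 (1,2,4,8 then +1 per RTT)
    cc.on_triple_dup_ack(); print(cc.congwin)   # 5
    cc.on_timeout(); print(cc.congwin)          # 1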

3-95

TCP sender congestion control

Event: ACK receipt for previously unacked data | State: Slow Start (SS)
  Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
  Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data | State: Congestion Avoidance (CA)
  Action: CongWin = CongWin + MSS * (MSS / CongWin)
  Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK | State: SS or CA
  Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
  Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout | State: SS or CA
  Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
  Commentary: enter slow start

Event: duplicate ACK | State: SS or CA
  Action: increment the duplicate ACK count for the segment being acked
  Commentary: CongWin and Threshold not changed

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when a loss occurs.
When the window is W, the throughput is W/RTT.
Just after a loss, the window drops to W/2, and the throughput to W/(2*RTT).
Average throughput: 0.75 * W/RTT

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT; we want 10 Gbps throughput.
Requires a window size of W = 83,333 in-flight segments.

Throughput in terms of loss rate:

    throughput = 1.22 * MSS / (RTT * sqrt(L))

Reaching 10 Gbps requires L = 2*10^-10 - wow!
New versions of TCP are needed for high speed.
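A quick check of the two numbers on this slide, by plugging the example parameters into the formulas above:

    # Window needed for 10 Gbps, and the loss rate implied by the throughput formula.
    MSS_bits = 1500 * 8
    RTT = 0.100                    # seconds
    target = 10e9                  # 10 Gbps

    W = target * RTT / MSS_bits                   # in-flight segments needed
    L = (1.22 * MSS_bits / (target * RTT)) ** 2   # solve throughput = 1.22*MSS/(RTT*sqrt(L))

    print(round(W))                # ~83333 segments
    print(L)                       # ~2e-10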

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.

(Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R.)

3-99

Why is TCP fair?

Two competing homogeneous sessions (e.g. similar propagation delay, etc.):
  additive increase gives a slope of 1 as throughput increases
  multiplicative decrease decreases throughput proportionally

(Figure: connection 1 throughput vs. connection 2 throughput, each bounded by R; the repeated cycle of "congestion avoidance: additive increase" followed by "loss: decrease window by factor of 2" converges toward the equal-bandwidth-share line.)

3-100

Fairness (more)

Fairness and UDP:
  multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
  instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
  research area: TCP friendly

Fairness and parallel TCP connections:
  nothing prevents an app from opening parallel connections between 2 hosts
  web browsers do this
  example: a link of rate R supporting 9 connections
    a new app asks for 1 TCP connection, gets rate R/10
    a new app asks for 11 TCP connections, gets around R/2!

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by:
  TCP connection establishment
  data transmission delay
  slow start (the sender's congestion window)

Notation, assumptions:
  assume one link between client and server, of rate R
  S: MSS (bits)
  O: object size (bits)
  no retransmissions (no loss, no corruption)
  the protocol's overhead is negligible

Window size:
  first assume a fixed congestion window of W segments
  then a dynamic window, modeling slow start

3-102

Fixed congestion window (1): assume no congestion-window constraint

Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R
  the ACK for the first segment in the window returns before a window's worth of data has been sent

    delay = 2*RTT + O/R

(The object request is piggybacked; this is the "lower bound" of latency.)

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R
  wait for the ACK after sending a window's worth of data

    delay = 2*RTT + O/R + (K-1) * [S/R + RTT - WS/R]

where K is the number of windows of data that cover the object.
If O is the "size" of the object, we have K = O/(WS) (rounded up to an integer).
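The two cases as one function; the example parameters at the bottom are illustrative choices, not from the slide.

    # Latency with a fixed congestion window of W segments (formulas above).
    from math import ceil

    def static_window_delay(O, S, R, RTT, W):
        if W * S / R > RTT + S / R:                 # case 1: ACKs return in time
            return 2 * RTT + O / R
        K = ceil(O / (W * S))                       # windows needed to cover the object
        return 2 * RTT + O / R + (K - 1) * (S / R + RTT - W * S / R)

    # e.g. O = 100 kbytes, S = 536 bytes, R = 1 Mbps, RTT = 100 ms, W = 4
    print(static_window_delay(O=800_000, S=536 * 8, R=1e6, RTT=0.1, W=4))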

3-104

TCP Delay Modeling: Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the delay for one object is

    Latency = 2*RTT + O/R + P*[RTT + S/R] - (2^P - 1)*S/R

where P is the number of times TCP idles at the server:

    P = min{Q, K-1}

  where Q is the number of times the server would idle if the object were of infinite size,
  and K is the number of windows that cover the object.

3-105

TCP Delay Modeling: Slow Start (2)

Delay components:
  2 RTT for connection establishment and request
  O/R to transmit the object
  time the server idles due to slow start

The server idles P = min{K-1, Q} times.

Example:
  O/S = 15 segments
  K = 4 windows
  Q = 2
  P = min{K-1, Q} = 2
  so the server idles P = 2 times

(Timeline: initiate TCP connection; request object; first window = S/R; second window = 2S/R; third window = 4S/R; fourth window = 8S/R; complete transmission; object delivered.)

3-106

TCP Delay Modeling (3)

For k = 1, 2, ..., K:

  S/R + RTT = time from when the server starts to send a segment until the server receives an acknowledgement

  2^(k-1) * S/R = time to transmit the k-th window

  [S/R + RTT - 2^(k-1) * S/R]+ = idle time after the k-th window

    delay = 2*RTT + O/R + sum_{k=1..P} idleTime_k
          = 2*RTT + O/R + sum_{k=1..P} [S/R + RTT - 2^(k-1)*S/R]
          = 2*RTT + O/R + P*[RTT + S/R] - (2^P - 1)*S/R

(Timeline as before: initiate TCP connection; request object; first window = S/R, second = 2S/R, third = 4S/R, fourth = 8S/R; complete transmission; object delivered.)

3-107

TCP Delay Modeling (4)

Recall: K = the number of windows that cover the object. How do we calculate K?

    K = min{k : 2^0*S + 2^1*S + ... + 2^(k-1)*S >= O}
      = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
      = min{k : 2^k - 1 >= O/S}
      = min{k : k >= log2(O/S + 1)}
      = ceil( log2(O/S + 1) )

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
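Putting the model together as a function. The expression used for Q is the standard closed form (the slide leaves its derivation as homework, so treat it as an assumption); small rounding differences from the slide's tables are possible.

    # Slow-start latency model from the derivation above.
    from math import ceil, floor, log2

    def slow_start_latency(O, S, R, RTT):
        K = ceil(log2(O / S + 1))                 # windows that cover the object
        Q = floor(log2(1 + RTT * R / S)) + 1      # idles if the object were infinite (assumed form)
        P = min(K - 1, Q)                         # times the server actually idles
        return 2 * RTT + O / R + P * (RTT + S / R) - (2**P - 1) * (S / R)

    # Experiment A setup: S = 536 bytes, RTT = 100 ms, O = 100 kbytes
    S, RTT, O = 536 * 8, 0.100, 100_000 * 8
    for R in (28e3, 100e3, 1e6, 10e6):
        print(f"R = {R/1e3:>6.0f} kbps: latency ~ {slow_start_latency(O, S, R, RTT):.2f} s")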

3-108

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

R          O/R        P    Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    28.6 sec   1    28.8 sec                    28.9 sec
100 kbps   8.0 sec    2    8.2 sec                     8.4 sec
1 Mbps     800 msec   5    1.0 sec                     1.5 sec
10 Mbps    80 msec    7    0.28 sec                    0.98 sec

Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

R          O/R        P    Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    1.43 sec   1    1.63 sec                    1.73 sec
100 kbps   0.4 sec    2    0.6 sec                     0.76 sec
1 Mbps     40 msec    3    0.24 sec                    0.52 sec
10 Mbps    4 msec     3    0.20 sec                    0.50 sec

Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

R          O/R        P    Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    1.43 sec   3    3.4 sec                     5.8 sec
100 kbps   0.4 sec    3    2.4 sec                     5.2 sec
1 Mbps     40 msec    3    2.0 sec                     5.0 sec
10 Mbps    4 msec     3    2.0 sec                     5.0 sec

Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
  1 base HTML page (of size O bits)
  M images (each of size O bits)

Non-persistent HTTP:
  M+1 TCP connections in series
  Response time = (M+1)*O/R + (M+1)*2*RTT + sum of idle times

Persistent HTTP:
  2 RTT to request and receive the base HTML file
  1 RTT to request and receive the M images
  Response time = (M+1)*O/R + 3*RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
  suppose M/X is an integer
  1 TCP connection for the base file
  M/X sets of parallel connections for the images
  Response time = (M+1)*O/R + (M/X + 1)*2*RTT + sum of idle times
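The three formulas as functions, ignoring the "sum of idle times" term, evaluated with the parameters used in the charts that follow:

    # Response-time formulas above (idle times ignored).
    def non_persistent(M, O, R, RTT):
        return (M + 1) * O / R + (M + 1) * 2 * RTT

    def persistent(M, O, R, RTT):
        return (M + 1) * O / R + 3 * RTT

    def non_persistent_parallel(M, O, R, RTT, X):
        return (M + 1) * O / R + (M / X + 1) * 2 * RTT

    O, RTT, M, X = 5_000 * 8, 0.100, 10, 5      # chart parameters: O = 5 Kbytes, M = 10, X = 5
    for R in (28e3, 100e3, 1e6, 10e6):
        print(round(non_persistent(M, O, R, RTT), 2),
              round(persistent(M, O, R, RTT), 2),
              round(non_persistent_parallel(M, O, R, RTT, X), 2))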

3-112

(Chart: HTTP response time in seconds for non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.)

For a low-bandwidth connection, the response time is dominated by the transmission time.
Persistent connections give only a minor improvement over parallel connections.

3-113

(Chart: HTTP response time in seconds for non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.)

For larger RTT, the response time is dominated by TCP establishment and slow-start delays.
Persistent connections now give an important improvement, particularly in high delay*bandwidth networks.

3-114

Chapter 3: Summary

principles behind transport-layer services:
  multiplexing, demultiplexing
  reliable data transfer
  flow control
  congestion control
instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application, transport layers) and heading into the network "core".

Page 27: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-27

rdt20 has a fatal flaw

What happens if ACKNAK corrupted

sender doesnrsquot know what happened at receiver

canrsquot just retransmit possible duplicate

What to do sender ACKsNAKs

receiverrsquos ACKNAK What if sender ACKNAK lost

retransmit but this might cause retransmission of correctly received pkt

Handling duplicates sender adds sequence

number to each pkt sender retransmits

current pkt if ACKNAK garbled

receiver discards (doesnrsquot deliver up) duplicate pkt

Sender sends one packet then waits for receiver response

stop and wait protocol

3-28

rdt21 sender handles garbled ACKNAKs

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

Wait for ACK or NAK 0 udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isNAK(rcvpkt) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt)

Wait for call 1 from

above

Wait for ACK or NAK 1

THE FSM GETS MESSY

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms: AIMD, slow start, conservative behavior after timeout events

Roughly: rate = CongWin / RTT  bytes/sec

Note: congestion control must also be efficient, to make use of the available bandwidth

3-89

TCP AIMD

[Figure: congestion window vs. time shows a sawtooth, with vertical-axis marks at 8, 16, and 24 Kbytes.]

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events (probing)

Long-lived TCP connection

Role of ACKs: 1. an indication of loss; 2. ACKs are self-clocking: if delay is large, the rate of congestion-window growth is reduced.

Human analogy: HK government or CUHK
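A minimal sketch of the AIMD rule described above, at per-RTT granularity (variable names are illustrative, not from any real TCP implementation):

```python
MSS = 1500  # segment size in bytes (illustrative)

def aimd_update(cong_win, loss_event):
    """One RTT of AIMD: halve on a loss event, otherwise probe by adding one MSS."""
    if loss_event:
        # multiplicative decrease: cut CongWin in half after a loss event
        return max(cong_win // 2, MSS)
    # additive increase: +1 MSS per RTT in the absence of loss (probing)
    return cong_win + MSS
```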

3-90

TCP Slow Start

When the connection begins, CongWin = 1 MSS. Example: MSS = 500 bytes and RTT = 200 msec, so the initial rate is about 20 kbps (worked out below).

The available bandwidth may be >> MSS/RTT, so it is desirable to quickly ramp up to a respectable rate.

When connection begins increase rate exponentially fast until first loss event
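The 20 kbps figure above is just one MSS per RTT:

```latex
\text{initial rate} \approx \frac{MSS}{RTT} = \frac{500 \times 8\ \text{bits}}{0.2\ \text{s}} = 20\ \text{kbps}
```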

3-91

TCP Slow Start (more)

When the connection begins, increase the rate exponentially until the first loss event: double CongWin every RTT, done by incrementing CongWin by 1 MSS for every ACK received (see the sketch after the figure).

Summary: the initial rate is slow but ramps up exponentially fast.

[Figure: time diagram between Host A and Host B: one segment sent in the first RTT, two segments in the second, four segments in the third.]
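A sketch of why per-ACK increments give per-RTT doubling during slow start (illustrative names, not code from a real stack):

```python
def slow_start_round(cong_win_mss):
    """One RTT of slow start: cong_win_mss segments are sent and ACKed,
    and each ACK adds one MSS, so the window doubles."""
    acks_received = cong_win_mss          # one ACK per segment in the window
    return cong_win_mss + acks_received   # e.g. 1 -> 2 -> 4 -> 8 ...
```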

3-92

Refinement

After 3 dup ACKs: CongWin is cut in half, and the window then grows linearly. This is known as the "fast recovery" phase, implemented in (the newer) TCP Reno.

But after a timeout event: CongWin is instead set to 1 MSS; the window then grows exponentially up to a threshold, then grows linearly.

Philosophy: 3 dup ACKs indicate the network is capable of delivering some segments; a timeout before 3 dup ACKs is "more alarming".

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?

A: When CongWin gets to 1/2 of its value before the timeout.

Implementation: a variable Threshold. At a loss event, Threshold is set to 1/2 of CongWin just before the loss event.

[Figure: congestion window size (segments, 0-14) vs. transmission round (1-15), comparing TCP Tahoe and TCP Reno; the threshold level is marked.]

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.

When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate and prepare for a loss.

3-95

TCP sender congestion control

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
TCP sender action: CongWin = CongWin + MSS; if (CongWin > Threshold), set state to "Congestion Avoidance"
Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
TCP sender action: CongWin = CongWin + MSS * (MSS / CongWin)
Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
State: SS or CA
TCP sender action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
State: SS or CA
TCP sender action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: enter slow start

Event: duplicate ACK
State: SS or CA
TCP sender action: increment the duplicate ACK count for the segment being ACKed
Commentary: CongWin and Threshold not changed
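A compact sketch of the table above as Python event handlers (illustrative names and initial values; real TCP implementations differ in many details, e.g. fast recovery's window inflation is omitted here):

```python
MSS = 1500  # bytes, illustrative

class RenoSender:
    """Sketch of the TCP-Reno congestion window rules from the table above."""

    def __init__(self):
        self.cong_win = MSS
        self.threshold = 64 * 1024  # illustrative initial threshold
        self.dup_acks = 0

    def on_new_ack(self):
        if self.cong_win < self.threshold:                 # slow start
            self.cong_win += MSS                           # doubles every RTT
        else:                                              # congestion avoidance
            self.cong_win += MSS * MSS // self.cong_win    # ~ +1 MSS per RTT
        self.dup_acks = 0

    def on_dup_ack(self):
        self.dup_acks += 1
        if self.dup_acks == 3:                             # triple duplicate ACK
            self.threshold = max(self.cong_win // 2, MSS)
            self.cong_win = self.threshold                 # fast recovery
            self.dup_acks = 0

    def on_timeout(self):
        self.threshold = max(self.cong_win // 2, MSS)
        self.cong_win = MSS                                # back to slow start
```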

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when loss occurs.

When the window is W, throughput is W/RTT.

Just after a loss, the window drops to W/2, and throughput to W/(2·RTT).

Average throughput: 0.75 W/RTT
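The 0.75 factor is the time-average of the sawtooth, whose window oscillates between W/2 and W:

```latex
\overline{\text{throughput}} \approx \frac{1}{RTT}\cdot\frac{\tfrac{W}{2}+W}{2} = \frac{3W}{4\,RTT} = 0.75\,\frac{W}{RTT}
```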

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput.

Requires a window size of W = 83,333 in-flight segments.

Throughput in terms of loss rate: throughput = (1.22 · MSS) / (RTT · √L)

This requires L = 2·10^-10. Wow! New versions of TCP for high-speed networks are needed.
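Checking both numbers against the relation above (MSS = 1500 bytes = 12,000 bits):

```latex
W = \frac{10^{10}\ \text{bps}\times 0.1\ \text{s}}{12000\ \text{bits}} \approx 83{,}333
\qquad
L = \left(\frac{1.22\cdot MSS}{RTT\cdot\text{throughput}}\right)^{2}
  = \left(\frac{1.22\times 12000}{0.1\times 10^{10}}\right)^{2}\approx 2\times 10^{-10}
```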

3-98

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

[Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R.]

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g., similar propagation delay, etc.):
additive increase gives a slope of 1 as throughput increases; multiplicative decrease reduces throughput proportionally.

[Figure: Connection 2 throughput vs. Connection 1 throughput, both axes from 0 to R. Starting from an unequal share, congestion avoidance (additive increase) moves the operating point along a 45° line until loss, when both windows are cut by a factor of 2; repeating this brings the shares toward the equal-bandwidth-share line.]
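A toy simulation of this argument under idealized assumptions (synchronized losses, unit additive increase per round; everything here is illustrative):

```python
def simulate_two_aimd_flows(r_capacity=100.0, x1=5.0, x2=80.0, rounds=200):
    """Toy model: additive increase each round; both flows halve when the sum exceeds capacity."""
    for _ in range(rounds):
        if x1 + x2 > r_capacity:        # shared bottleneck overloaded: both see loss
            x1, x2 = x1 / 2, x2 / 2     # multiplicative decrease
        else:
            x1, x2 = x1 + 1, x2 + 1     # additive increase (1 unit per round)
    return x1, x2

print(simulate_two_aimd_flows())  # both rates end up close to the equal share R/2
```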

3-100

Fairness (more)

Fairness and UDP: multimedia apps often do not use TCP, since they do not want their rate throttled by congestion control. Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss. (Research area: TCP-friendly congestion control.)

Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts, and Web browsers do this. Example: a link of rate R supporting 9 connections; a new app asking for 1 TCP connection gets rate R/10, but asking for 11 TCP connections gets around R/2.
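The arithmetic, assuming each connection at the bottleneck gets an equal share:

```latex
\text{1 new connection: } \frac{R}{9+1}=\frac{R}{10}
\qquad
\text{11 new connections: } \frac{11R}{9+11}=\frac{11R}{20}\approx\frac{R}{2}
```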

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by: TCP connection establishment, data transmission delay, slow start, and the sender's congestion window.

Notation, assumptions: assume one link between client and server, of rate R; S: MSS (bits); O: object size (bits); no retransmissions (no loss, no corruption); protocol overhead is negligible.

Window size: first assume a fixed congestion window of W segments; then dynamic window modeling (slow start).

3-102

Fixed congestion window (1): assume no congestion-window constraint

First case: WS/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data has been sent.

delay = 2·RTT + O/R

The object request is piggybacked. This is the "lower bound" on latency.

Let W denote a 'fixed' congestion window size (a positive integer).

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R: the sender waits for an ACK after sending a window's worth of data.

delay = 2·RTT + O/R + (K-1)·[S/R + RTT - WS/R]

where K is the number of windows of data that cover the object; if O is the size of the object, K = O/(WS), rounded up to an integer.
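A worked instance with made-up numbers (purely illustrative, not from the slides): O = 500 kbits, S = 10 kbits, W = 4, R = 1 Mbps, RTT = 100 ms. Then WS/R = 40 ms < RTT + S/R = 110 ms (second case) and K = ⌈O/(WS)⌉ = 13, so:

```latex
\text{delay} = 2(0.1) + \frac{500\ \text{kb}}{1\ \text{Mbps}} + (13-1)\,[0.01 + 0.1 - 0.04]
            = 0.2 + 0.5 + 12(0.07) = 1.54\ \text{s}
```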

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

Latency = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where P is the number of times TCP idles at the server:

P = min{Q, K - 1}

- where Q is the number of times the server would idle if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[Figure: timing diagram between client and server: initiate TCP connection; request object; first window = S/R; second window = 2S/R; third window = 4S/R; fourth window = 8S/R; complete transmission; object delivered.]

Example: O/S = 15 segments; K = 4 windows; Q = 2; P = min{K-1, Q} = 2, so the server idles P = 2 times.

Delay components: 2·RTT for connection establishment and request; O/R to transmit the object; plus the time the server idles due to slow start.

In general, the server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

delay = 2·RTT + O/R + sum over k = 1..P of idleTime_k
      = 2·RTT + O/R + sum over k = 1..P of [ S/R + RTT - 2^(k-1)·S/R ]
      = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where, for k = 1, 2, ..., K:

2^(k-1)·S/R = time to transmit the k-th window

S/R + RTT = time from when the server starts to send a segment until the server receives its acknowledgement

[ S/R + RTT - 2^(k-1)·S/R ]^+ = idle time after the k-th window

[Figure: the same client/server timing diagram as on the previous slide (initiate TCP connection, request object, windows of S/R, 2S/R, 4S/R, 8S/R, complete transmission, object delivered).]

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

K = min{ k : 2^0·S + 2^1·S + ... + 2^(k-1)·S >= O }
  = min{ k : (2^k - 1)·S >= O }
  = min{ k : 2^k >= O/S + 1 }
  = min{ k : k >= log2(O/S + 1) }
  = ⌈log2(O/S + 1)⌉

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
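A small Python sketch of the slow-start latency model derived above. The closed form for Q is my own working of the "see HW" step, assuming the server idles after window k whenever 2^(k-1)·S/R < S/R + RTT; treat it as an assumption, and note the slide tables may round slightly differently.

```python
import math

def slow_start_latency(O, S, R, RTT):
    """Latency = 2*RTT + O/R + P*(RTT + S/R) - (2**P - 1)*S/R, with P = min(K-1, Q).

    O, S in bits; R in bits/sec; RTT in seconds.
    """
    K = math.ceil(math.log2(O / S + 1))             # windows covering the object
    Q = math.floor(math.log2(1 + RTT * R / S)) + 1  # idles for an infinite object (assumed form)
    P = min(K - 1, Q)
    return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R

# Example: S = 536 bytes, RTT = 100 ms, O = 100 kbytes, R = 28 kbps
# gives roughly 28.9 s, matching the 28-kbps row of Experiment A below.
print(round(slow_start_latency(O=100_000 * 8, S=536 * 8, R=28_000, RTT=0.1), 1))
```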

3-108

Experiment A

R        | O/R      | P | Min latency: O/R + 2·RTT | Latency with slow start
28 kbps  | 28.6 sec | 1 | 28.8 sec                 | 28.9 sec
100 kbps | 8.0 sec  | 2 | 8.2 sec                  | 8.4 sec
1 Mbps   | 800 msec | 5 | 1.0 sec                  | 1.5 sec
10 Mbps  | 80 msec  | 7 | 0.28 sec                 | 0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

1. Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R        | O/R      | P | Min latency: O/R + 2·RTT | Latency with slow start
28 kbps  | 1.43 sec | 1 | 1.63 sec                 | 1.73 sec
100 kbps | 0.4 sec  | 2 | 0.6 sec                  | 0.76 sec
1 Mbps   | 40 msec  | 3 | 0.24 sec                 | 0.52 sec
10 Mbps  | 4 msec   | 3 | 0.20 sec                 | 0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

1. Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

R        | O/R      | P | Min latency: O/R + 2·RTT | Latency with slow start
28 kbps  | 1.43 sec | 3 | 3.4 sec                  | 5.8 sec
100 kbps | 0.4 sec  | 3 | 2.4 sec                  | 5.2 sec
1 Mbps   | 40 msec  | 3 | 2.0 sec                  | 5.0 sec
10 Mbps  | 4 msec   | 3 | 2.0 sec                  | 5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

1. Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
1 base HTML page (of size O bits)
M images (each of size O bits)

Non-persistent HTTP:
M+1 TCP connections in series
Response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times

Persistent HTTP:
2 RTT to request and receive the base HTML file
1 RTT to request and receive the M images
Response time = (M+1)·O/R + 3·RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
Suppose M/X is an integer
1 TCP connection for the base file; M/X sets of X parallel connections for the images
Response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times
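With the parameters used in the charts that follow (M = 10, X = 5), the RTT terms alone already separate the three schemes (idle-time terms omitted here):

```latex
(M+1)\cdot 2\,RTT = 22\,RTT \ \text{(non-persistent)};\qquad
3\,RTT \ \text{(persistent)};\qquad
\left(\tfrac{M}{X}+1\right)\cdot 2\,RTT = 6\,RTT \ \text{(parallel non-persistent)}
```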

3-112

[Figure: bar chart of HTTP response time (in seconds, 0-20) for 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps access rates, comparing non-persistent, persistent, and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.]

For low bandwidth, connection and response time are dominated by transmission time.

Persistent connections only give minor improvement over parallel connections

3-113

[Figure: bar chart of HTTP response time (in seconds, 0-70) for 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps access rates, comparing non-persistent, persistent, and parallel non-persistent HTTP; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.]

For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services: multiplexing/demultiplexing, reliable data transfer, flow control, congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application and transport layers) and moving into the network "core"


Page 29: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-29

rdt21 receiver handles garbled ACKNAKs

Wait for 0 from below

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq0(rcvpkt)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

Wait for 1 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq0(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp not corrupt(rcvpkt) ampamp has_seq1(rcvpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt)

sndpkt = make_pkt(ACK chksum)udt_send(sndpkt)

sndpkt = make_pkt(NAK chksum)udt_send(sndpkt)

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

sender / receiver timeline:
first packet bit transmitted, t = 0
last packet bit transmitted, t = L/R
first packet bit arrives; last packet bit arrives, send ACK
ACK arrives, send next packet, t = RTT + L/R

U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

sender / receiver timeline:
first packet bit transmitted, t = 0
last bit transmitted, t = L/R
first packet bit arrives; last packet bit arrives, send ACK
last bit of 2nd packet arrives, send ACK; last bit of 3rd packet arrives, send ACK
ACK arrives, send next packet, t = RTT + L/R

U_sender = 3·(L/R) / (RTT + L/R) = 0.024 / 30.008 = 0.0008

Increase utilization by a factor of 3!

3-41

Go-Back-N (sliding window protocol)

Sender: k-bit seq # in pkt header; "window" of up to N consecutive unACKed pkts allowed

ACK(n): ACKs all pkts up to and including seq # n ("cumulative ACK"). Sender may receive duplicate ACKs (see receiver).

timer for each in-flight pkt; timeout(n): retransmit pkt n and all higher-seq-# pkts in the window

(For now, treat seq # as unlimited.)

Q: what happens when a receiver is totally disconnected? MAX RETRY.

DON'T FALL ASLEEP!

3-42

GBN sender extended FSM

initially: base = 1, nextseqnum = 1

rdt_send(data):
  if (nextseqnum < base + N) {
    sndpkt[nextseqnum] = make_pkt(nextseqnum, data, chksum)
    udt_send(sndpkt[nextseqnum])
    if (base == nextseqnum)
      start_timer
    nextseqnum++
  }
  else
    refuse_data(data)           // buffer data, or block the higher app

timeout:
  start_timer
  udt_send(sndpkt[base])
  udt_send(sndpkt[base+1])
  ...
  udt_send(sndpkt[nextseqnum-1])

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
  base = getacknum(rcvpkt) + 1
  if (base == nextseqnum)
    stop_timer                  // no pkt in the pipe
  else
    start_timer                 // reset the timer

rdt_rcv(rcvpkt) && corrupt(rcvpkt): (ignore)

3-43

GBN receiver extended FSM

ACK-only: always send ACK for correctly-received pkt with highest in-order seq #
  may generate duplicate ACKs
  need only remember expectedseqnum

out-of-order pkt: discard (don't buffer) -> no receiver buffering! Re-ACK pkt with highest in-order seq #

receiver FSM fragment (single Wait state):

initially: expectedseqnum = 1; sndpkt = make_pkt(expectedseqnum, ACK, chksum)

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum):
  extract(rcvpkt, data); deliver_data(data)
  sndpkt = make_pkt(expectedseqnum, ACK, chksum)
  udt_send(sndpkt)
  expectedseqnum++

default:
  udt_send(sndpkt)

If an in-order pkt is received, deliver it to the app and ACK it; else just drop it. We don't even need a timer at the receiver: we simply rely on retransmission by the sender.
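To complement the sender FSM two slides back, here is a minimal Python sketch of the same Go-Back-N sender logic; the send callback and timer object are illustrative stand-ins for udt_send and the countdown timer, not the book's code.

# Minimal Go-Back-N sender sketch (illustrative names).
class GBNSender:
    def __init__(self, N, send_fn, timer):
        self.N = N                     # window size
        self.base = 1                  # oldest unACKed sequence number
        self.nextseqnum = 1            # next sequence number to use
        self.sndpkt = {}               # copies of sent-but-unACKed packets
        self.send = send_fn            # callback that puts a packet on the channel
        self.timer = timer             # object with start() / stop()

    def rdt_send(self, data):
        if self.nextseqnum < self.base + self.N:
            pkt = (self.nextseqnum, data)          # stand-in for make_pkt + checksum
            self.sndpkt[self.nextseqnum] = pkt
            self.send(pkt)
            if self.base == self.nextseqnum:
                self.timer.start()                 # timer covers the oldest unACKed pkt
            self.nextseqnum += 1
            return True
        return False                               # window full: buffer or block the app

    def ack_received(self, acknum):                # cumulative ACK(n)
        if acknum < self.base:
            return                                 # old / duplicate ACK: ignore
        self.base = acknum + 1
        if self.base == self.nextseqnum:
            self.timer.stop()                      # no pkt left in the pipe
        else:
            self.timer.start()                     # reset timer for new oldest unACKed pkt

    def timeout(self):
        self.timer.start()
        for seq in range(self.base, self.nextseqnum):
            self.send(self.sndpkt[seq])            # resend pkt base .. nextseqnum-1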

3-44

GBN in action

What determines the size of the window?

1. RTT
2. Buffer at the receiver (flow control)
3. Network congestion

Q: GBN can have poor performance. How? Sender sends pkts 1,2,3,4,5,6,7,8,9; pkt 1 gets lost; the receiver gets pkts 2,3,4,5,... but will discard them all.

Window size = N = 4

3-45

Selective Repeat (improvement of the GBN protocol)

receiver individually acknowledges all correctly received pkts

buffers pkts as needed for eventual in-order delivery to the upper layer. E.g., sender sends pkts 1,2,3,4,...,10; receiver got 2,4,6,8,10; sender resends 1,3,5,7,9.

sender only resends pkts for which an ACK was not received; sender keeps a timer for EACH unACKed pkt

sender window: N consecutive seq #'s; again limits the seq #s of sent, unACKed pkts

3-46

Selective repeat sender receiver windows

Q: why do we have this window region? Because the ACK may have been lost, or the ACK is still on its way.

3-47

Selective repeat

sender:

data from above: if next available seq # is in window, send pkt

timeout(n): for pkt n, resend pkt n, restart its timer

ACK(n) in [sendbase, sendbase+N]: mark pkt n as received; if n is the smallest unACKed pkt, advance the window base to the next unACKed seq # (slide the window)

receiver:

pkt n in [rcvbase, rcvbase+N-1]: send ACK(n); if out-of-order: buffer; if in-order: deliver (also deliver buffered, in-order pkts), advance window to next not-yet-received pkt

pkt n in [rcvbase-N, rcvbase-1]: ACK(n)
  Q: why do we need this? The ACK got lost; the sender may time out and resend the pkt, and we need to ACK it again.

otherwise: ignore
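A minimal Python sketch of the receiver rules just listed; the deliver and send-ACK callbacks are illustrative stand-ins, not the book's code.

# Minimal Selective Repeat receiver sketch.
class SRReceiver:
    def __init__(self, N, deliver_fn, send_ack_fn):
        self.N = N                      # window size
        self.rcvbase = 1                # lowest not-yet-delivered sequence number
        self.buffer = {}                # out-of-order packets, keyed by seq #
        self.deliver = deliver_fn
        self.send_ack = send_ack_fn

    def pkt_received(self, seq, data):
        if self.rcvbase <= seq < self.rcvbase + self.N:
            self.send_ack(seq)          # ACK individually, even if out of order
            self.buffer[seq] = data
            while self.rcvbase in self.buffer:     # deliver any in-order run
                self.deliver(self.buffer.pop(self.rcvbase))
                self.rcvbase += 1       # slide the window
        elif self.rcvbase - self.N <= seq < self.rcvbase:
            self.send_ack(seq)          # already delivered: re-ACK (our earlier ACK was lost)
        # otherwise: ignore the packet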

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example: seq #'s 0, 1, 2, 3; window size (N) = 3

receiver sees no difference in the two scenarios!

incorrectly passes duplicate data as new in (a)

Q: what is the relationship between seq # space size and window size?

N <= 2^k / 2, i.e. the window can be at most half the sequence-number space.

In real life we use k bits to implement the seq #. A practical issue!

3-50

Why bother studying reliable data transfer?

We know it is provided by TCP, so why bother to study it?

Sometimes we may need to implement "some form" of reliable transfer without heavy-duty TCP.

A good example is multimedia streaming. Even though the application is loss tolerant, if too many packets get lost it affects the visual quality, so we may want to implement some form of reliable transfer.

At the very least, appreciate the "good services" provided by some Internet gurus.

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

point-to-point: one sender, one receiver (not multicast)

reliable, in-order byte stream: no "message boundaries" (in the app layer we need delimiters)

pipelined: TCP congestion and flow control set window size

full duplex data: bi-directional data flow in the same connection; MSS: maximum segment size

connection-oriented: handshaking (exchange of control msgs) inits sender, receiver state (e.g. buffer size) before data exchange

flow controlled: sender will not overwhelm receiver

send & receive buffers: application writes data -> socket door -> TCP send buffer -> segments -> TCP receive buffer -> socket door -> application reads data

(The 800-lb gorilla in the transport stack: PAY ATTENTION!)

3-53

TCP segment structure

(32 bits wide)

source port # | dest port #
sequence number
acknowledgement number
head len | not used | U A P R S F | receive window
checksum | urg data pointer
options (variable length)
application data (variable length)

URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection estab (setup, teardown commands)

receive window: # bytes rcvr willing to accept

counting by "bytes" of data (not segments!)

Internet checksum (as in UDP)

Due to the options field we have a variable-length header.
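To make the field layout concrete, here is a small Python sketch that packs a bare 20-byte TCP header with the standard struct module; no options, and the checksum is left at zero (a real stack computes it over a pseudo-header as well). The function and field values are illustrative.

# Packing a bare 20-byte TCP header.
import struct

def tcp_header(src_port, dst_port, seq, ack, flags, rcv_window):
    data_offset = 5                                       # header length in 32-bit words (20 bytes)
    offset_flags = (data_offset << 12) | (flags & 0x3F)   # URG=0x20 ACK=0x10 PSH=0x08 RST=0x04 SYN=0x02 FIN=0x01
    return struct.pack("!HHIIHHHH",
                       src_port, dst_port,
                       seq, ack,
                       offset_flags,
                       rcv_window,                        # bytes rcvr willing to accept
                       0,                                 # checksum
                       0)                                 # urgent data pointer

hdr = tcp_header(12345, 80, seq=42, ack=79, flags=0x10, rcv_window=65535)
print(len(hdr))                                           # 20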

3-54

TCP seq. #'s and ACKs

Seq. #'s: byte-stream "number" of the first byte in the segment's data

ACKs: seq # of the next byte expected from the other side; cumulative ACK

Q: how does the receiver handle out-of-order segments? A: the TCP spec doesn't say; it's up to the implementor.

Simple telnet scenario (user at Host A types 'C'; Host B echoes it back):

Host A -> Host B: Seq=42, ACK=79, data = 'C'      (user types 'C')
Host B -> Host A: Seq=79, ACK=43, data = 'C'      (host ACKs receipt of 'C', echoes back 'C')
Host A -> Host B: Seq=43, ACK=80                  (host ACKs receipt of echoed 'C')

(Negotiated during the 3-way handshake.)

3-55

TCP Round Trip Time and Timeout

Q: how to set the TCP timeout value?
  longer than RTT, but RTT varies
  too short: premature timeout, unnecessary retransmissions
  too long: slow reaction to segment loss, poor performance

Q: how to estimate RTT?
  SampleRTT: measured time from segment transmission until ACK receipt; ignore retransmissions
  SampleRTT will vary; want the estimated RTT "smoother": average several recent measurements, not just the current SampleRTT

(Figure: tx / retx / ack timelines showing an estimated RTT that is too long and one that is too short.)

3-56

TCP Round Trip Time and Timeout

EstimatedRTT = (1 - α)·EstimatedRTT + α·SampleRTT

Exponential weighted moving average: the influence of a past sample decreases exponentially fast. Typical value: α = 0.125.

ERTT(0) = 0
ERTT(1) = (1-α)·ERTT(0) + α·SRTT(1) = α·SRTT(1)
ERTT(2) = (1-α)·α·SRTT(1) + α·SRTT(2)
ERTT(3) = (1-α)²·α·SRTT(1) + (1-α)·α·SRTT(2) + α·SRTT(3)

3-57

Example RTT estimation: RTT from gaia.cs.umass.edu to fantasia.eurecom.fr

(Figure: SampleRTT and EstimatedRTT, in milliseconds (100-350 ms), plotted against time in seconds; EstimatedRTT tracks SampleRTT but is visibly smoother.)

3-58

TCP Round Trip Time and Timeout

Setting the timeout (Jacobson/Karels): EstimatedRTT plus a "safety margin"; a large variation in EstimatedRTT means a larger safety margin.

First estimate how much SampleRTT deviates from EstimatedRTT:

DevRTT = (1 - β)·DevRTT + β·|SampleRTT - EstimatedRTT|        (typically β = 0.25)

Then set the timeout interval:

TimeoutInterval = EstimatedRTT + 4·DevRTT
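The two EWMAs and the timeout rule fit in a few lines; a Python sketch with α = 0.125 and β = 0.25 as above (the starting values and the SampleRTT measurements are made up):

# EWMA RTT estimation and timeout, per the formulas above.
def update_rtt(sample_rtt, est_rtt, dev_rtt, alpha=0.125, beta=0.25):
    dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample_rtt - est_rtt)
    est_rtt = (1 - alpha) * est_rtt + alpha * sample_rtt
    timeout = est_rtt + 4 * dev_rtt          # EstimatedRTT plus a safety margin
    return est_rtt, dev_rtt, timeout

est, dev, to = 0.1, 0.0, 0.0                 # assumed starting values (seconds)
for s in [0.12, 0.09, 0.30, 0.11]:           # made-up SampleRTT measurements
    est, dev, to = update_rtt(s, est, dev)
print(round(est, 3), round(to, 3))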

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender events:

data rcvd from app: create segment with seq #. The seq # is the byte-stream number of the first data byte in the segment. Start the timer if it is not already running (think of the timer as being for the oldest unacked segment). Expiration interval: TimeOutInterval.

timeout: retransmit the segment that caused the timeout; restart the timer.

ACK rcvd: if it acknowledges previously unacked segments, update what is known to be acked; start the timer if there are outstanding segments.

3-62

TCP sender (simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event) {

  event: data received from application above
    create TCP segment with sequence number NextSeqNum
    if (timer currently not running)
      start timer
    pass segment to IP
    NextSeqNum = NextSeqNum + length(data)

  event: timer timeout
    retransmit not-yet-acknowledged segment with smallest sequence number
    start timer

  event: ACK received, with ACK field value of y
    if (y > SendBase) {
      SendBase = y
      if (there are currently not-yet-acknowledged segments)
        start timer
    }
  }
} /* end of loop forever */

Comment: SendBase-1 is the last cumulatively ACKed byte.
Example: SendBase-1 = 71, y = 73, so the rcvr wants 73+; y > SendBase, so new data is acked.

3-63

TCP retransmission scenarios

lost ACK scenario:
  Host A sends Seq=92, 8 bytes data; Host B's ACK=100 is lost (X).
  A's timer expires; A resends Seq=92, 8 bytes data; B replies ACK=100 again.
  (Sendbase = 100 once the ACK finally arrives.)

premature timeout scenario:
  Host A sends Seq=92, 8 bytes data, then Seq=100, 20 bytes data.
  B sends ACK=100 and ACK=120, but A's timer for Seq=92 expires first, so A resends Seq=92, 8 bytes data.
  B replies with the cumulative ACK=120.
  (SendBase = 100 after ACK=100, then SendBase = 120.)

3-64

TCP retransmission scenarios (more)

Cumulative ACK scenario:
  Host A sends Seq=92, 8 bytes data, then Seq=100, 20 bytes data.
  B's ACK=100 is lost (X), but ACK=120 arrives before A's timeout; the cumulative ACK covers both segments, so nothing is retransmitted (SendBase = 120).

Room for improvement?

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver -> TCP Receiver action

Arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed
  -> Delayed ACK: wait up to 500 ms for the next segment; if no next segment, send ACK

Arrival of in-order segment with expected seq #; one other segment has an ACK pending
  -> Immediately send a single cumulative ACK, ACKing both in-order segments

Arrival of out-of-order segment with higher-than-expected seq #; gap detected
  -> Immediately send a duplicate ACK, indicating the seq # of the next expected byte

Arrival of segment that partially or completely fills the gap
  -> Immediately send an ACK, provided that the segment starts at the lower end of the gap

ACK the "largest in-order byte" seq #. REDUCE FEEDBACK TRAFFIC BY HALF.

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

Fast retransmit algorithm:

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments)
      start timer
  }
  else {                                        /* a duplicate ACK for already-ACKed data */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y     /* fast retransmit */
  }

Q: why resend the pkt with seq # y? A: that is what the receiver expects.

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service: matching the send rate to the receiving app's drain rate; the app process may be slow at reading from the buffer

flow control: sender won't overflow the receiver's buffer by transmitting too much, too fast

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises the spare room by including the value of RcvWindow in segments

Sender limits unACKed data to RcvWindow; this guarantees the receive buffer doesn't overflow

This goes to show that the design of the header is important.
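A small Python sketch of the bookkeeping described above (variable names follow the slide; the 64 KB buffer size is just an assumed example):

# Receive-window bookkeeping per the formula above.
RcvBuffer = 64 * 1024

def rcv_window(last_byte_rcvd, last_byte_read):
    # spare room the receiver advertises back to the sender
    return RcvBuffer - (last_byte_rcvd - last_byte_read)

def sender_may_send(last_byte_sent, last_byte_acked, advertised_window, nbytes):
    # keep unACKed data <= RcvWindow so the receive buffer cannot overflow
    return (last_byte_sent - last_byte_acked) + nbytes <= advertised_window

print(rcv_window(10000, 4000))                          # 59536 bytes of spare room
print(sender_may_send(12000, 10000, 59536, 1460))       # True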

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender and receiver establish a "connection" before exchanging data segments

initialize TCP variables: seq. #s, buffers, flow control info (e.g. RcvWindow)

client: connection initiator
  Socket clientSocket = new Socket("hostname", "port number");
server: contacted by client
  Socket connectionSocket = welcomeSocket.accept();

Three way handshake:

Step 1: client host sends TCP SYN segment to server; specifies initial seq #; no data

Step 2: server host receives SYN, replies with SYN-ACK segment; server allocates buffers; specifies server initial seq #

Step 3: client receives SYN-ACK, replies with ACK segment, which may contain data

IMPORTANT: SYN FLOODING

3-73

TCP three-way handshake

client -> server: SYN=1, seq=client_isn                      (connection request)
server -> client: SYN=1, seq=server_isn, ack=client_isn+1    (connection granted)
client -> server: SYN=0, seq=client_isn+1, ack=server_isn+1  (ACK)

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket: clientSocket.close();

Step 1: client end system sends TCP FIN control segment to server

Step 2: server receives FIN, replies with ACK; closes connection, sends FIN

timeline:
  client -> server: FIN            (close)
  server -> client: ACK
  server -> client: FIN            (close)
  client -> server: ACK            (client enters timed wait, then closed)

Q: why don't we combine the ACK and FIN? The sender may still have some data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3: client receives FIN, replies with ACK; enters "timed wait" and will respond with ACK to received FINs

Step 4: server receives ACK; connection closed

Note: with a small modification, this can handle simultaneous FINs.

timeline:
  client -> server: FIN            (closing)
  server -> client: ACK            (server closing)
  server -> client: FIN
  client -> server: ACK            (client: timed wait, then closed; server: closed)

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

(Figure: Host A and Host B each send λ_in (original data) from the application into a router with unlimited shared output link buffers and outgoing capacity C; λ_out is delivered to the application on the other side.)

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

(Figure: Host A sends λ_in (original data) and λ'_in (original data plus retransmitted data) into a router with finite shared output link buffers; λ_out is delivered at Host B. The retransmission is due to the transport layer: a pkt got dropped.)

3-81

Causes/costs of congestion: scenario 2

always: λ_in = λ_out (goodput)

"perfect" retransmission, only when loss: λ'_in > λ_out

retransmission of a delayed (not lost) packet makes λ'_in larger (than in the perfect case) for the same λ_out

"costs" of congestion:
  more work (retransmissions) for a given "goodput"
  unneeded retransmissions: the link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3

four senders, multihop paths, timeout/retransmit

Q: what happens as λ_in and λ'_in increase?

(Figure: Host A sends λ_in (original data) and λ'_in (original data plus retransmitted data) across routers with finite shared output link buffers; λ_out is delivered at Host B.)

3-83

Causescosts of congestion scenario 3

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted.

(Figure: λ_out vs. offered load for Hosts A and B; the system collapses, e.g. students!)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell: a congested switch may lower the ER value in the cell; the sender's send rate is thus the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission: LastByteSent - LastByteAcked <= CongWin

Roughly: rate = CongWin / RTT bytes/sec

CongWin is a dynamic function of perceived network congestion

How does the sender perceive congestion?
  loss event = timeout or 3 duplicate acks
  TCP sender reduces rate (CongWin) after a loss event

three mechanisms:
  AIMD
  slow start
  conservative after timeout events

Note: CC must be efficient, to make use of the available BW.

3-89

TCP AIMD

(Figure: congestion window of a long-lived TCP connection over time, a sawtooth; y-axis ticks at 8, 16, 24 Kbytes.)

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

Role of ACKs:
1. an indication of loss
2. ACKs are self-clocking: if the delay is large, the rate at which the congestion window grows is reduced

Human analogy: HK government or CUHK?
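AIMD on its own is tiny; a Python sketch of one congestion-window update per RTT (the MSS value and the loss pattern are made up):

# AIMD in isolation: +1 MSS per RTT, halve the window on a loss event.
MSS = 1460                                    # bytes, illustrative

def aimd_step(cong_win, loss_event):
    if loss_event:
        return max(MSS, cong_win // 2)        # multiplicative decrease (floor of 1 MSS)
    return cong_win + MSS                     # additive increase

cwnd = 8 * MSS
for loss in [False, False, False, True, False]:   # made-up per-RTT loss pattern
    cwnd = aimd_step(cwnd, loss)
    print(cwnd // MSS, "MSS")                 # 9, 10, 11, 5, 6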

3-90

TCP Slow Start

When the connection begins, CongWin = 1 MSS.
  Example: MSS = 500 bytes & RTT = 200 msec -> initial rate = 20 kbps

available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate

When the connection begins, increase the rate exponentially fast until the first loss event.

3-91

TCP Slow Start (more)

When the connection begins, increase the rate exponentially until the first loss event:
  double CongWin every RTT
  done by incrementing CongWin for every ACK received

Summary: the initial rate is slow but ramps up exponentially fast.

(Figure: Host A sends one segment, then two, then four, each round one RTT apart, as the ACKs from Host B come back.)

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after a timeout event: CongWin is instead set to 1 MSS; the window then grows exponentially up to a threshold, then grows linearly.

Philosophy:
  3 dup ACKs indicate the network is capable of delivering some segments
  a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

(Figure: congestion window size (segments, 0-14) vs. transmission round (1-15), showing the threshold and the different behaviour of TCP Tahoe and TCP Reno after a loss.)

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed
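The table reads naturally as one event handler; here is a minimal Python sketch of those actions (the state is just CongWin, Threshold and the SS/CA phase; the MSS and initial threshold values are assumptions, and everything else TCP does is omitted):

# Sender-side congestion-control actions from the table above.
MSS = 1460                                   # bytes, illustrative

class RenoLikeCC:
    def __init__(self):
        self.cong_win = 1 * MSS
        self.threshold = 64 * 1024
        self.state = "SS"                    # slow start

    def on_new_ack(self):                    # ACK for previously unacked data
        if self.state == "SS":
            self.cong_win += MSS             # doubles CongWin every RTT
            if self.cong_win > self.threshold:
                self.state = "CA"
        else:                                # congestion avoidance
            self.cong_win += MSS * MSS / self.cong_win   # ~ +1 MSS per RTT

    def on_triple_dup_ack(self):             # fast recovery / multiplicative decrease
        self.threshold = self.cong_win / 2
        self.cong_win = max(self.threshold, MSS)
        self.state = "CA"

    def on_timeout(self):                    # back to slow start
        self.threshold = self.cong_win / 2
        self.cong_win = 1 * MSS
        self.state = "SS"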

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when loss occurs.

When the window is W, throughput is W/RTT.

Just after a loss, the window drops to W/2, and throughput to (W/2)/RTT.

Average throughput: 0.75·W/RTT

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput

Requires window size W = 83,333 in-flight segments

Throughput in terms of loss rate:  throughput = 1.22·MSS / (RTT·√L)

-> L = 2·10^-10.  Wow! New versions of TCP are needed for high speed.
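Plugging the example's numbers into the formulas above, as a rough check (the last line also applies the 0.75·W/RTT average with this W, which comes out to 7.5 Gbps):

# Quick check of the 10 Gbps example.
MSS = 1500 * 8                               # segment size in bits
RTT = 0.100                                  # seconds
target = 10e9                                # desired throughput, bits/sec

W = target * RTT / MSS                       # in-flight window needed: ~83,333 segments
L = (1.22 * MSS / (RTT * target)) ** 2       # loss rate that formula allows: ~2e-10
avg_tput = 0.75 * W * MSS / RTT              # 0.75*W/RTT average with this W: 7.5e9 bits/sec
print(round(W), L, avg_tput / 1e9)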

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair?

Two competing homogeneous sessions (e.g. similar propagation delay, etc.):

additive increase gives a slope of 1 as throughput increases; multiplicative decrease decreases throughput proportionally

congestion avoidance: additive increase
loss: decrease window by factor of 2
congestion avoidance: additive increase
loss: decrease window by factor of 2

(Figure: connection 2 throughput vs. connection 1 throughput, both limited to R; the trajectory alternates between additive increase and halving, converging toward the equal-bandwidth-share line.)

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents an app from opening parallel connections between 2 hosts

Web browsers do this. Example: link of rate R supporting 9 connections:
  a new app asks for 1 TCP connection, gets rate R/10
  a new app asks for 11 TCP connections, gets around R/2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1): assume no congestion window constraint

Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data has been sent.

delay = 2RTT + O/R

(The object request is piggybacked. This is the "lower bound" of the latency.)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)
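Both cases fold into one small function; a Python sketch (the example parameters at the end are arbitrary):

# Latency with a fixed congestion window of W segments, per the two cases above.
import math

def static_window_delay(O, S, R, RTT, W):
    K = math.ceil(O / (W * S))                   # number of windows covering the object
    stall = S / R + RTT - W * S / R              # idle time per window, if positive
    if stall <= 0:                               # case 1: ACK returns before window is drained
        return 2 * RTT + O / R
    return 2 * RTT + O / R + (K - 1) * stall     # case 2: sender stalls K-1 times

# e.g. O = 100 kbytes, S = 536 bytes, R = 1 Mbps, RTT = 100 ms, W = 4 segments
print(static_window_delay(O=100e3 * 8, S=536 * 8, R=1e6, RTT=0.1, W=4))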

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

Latency = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where P is the number of times TCP idles at the server:

P = min{Q, K-1}

- Q is the number of times the server would idle if the object were of infinite size
- K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

(Figure: time at client / time at server. Initiate TCP connection; request object; first window = S/R; second window = 2S/R; third window = 4S/R; fourth window = 8S/R; complete transmission; object delivered.)

Example: O/S = 15 segments; K = 4 windows; Q = 2; P = min{K-1, Q} = 2

Server idles P = 2 times.

Delay components:
  2 RTT for connection establishment and request
  O/R to transmit the object
  the time the server idles due to slow start

Server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

time to transmit the k-th window = (2^(k-1))·S/R

S/R + RTT = time from when the server starts to send a segment until the server receives its acknowledgement

idle time after the k-th window = [S/R + RTT - (2^(k-1))·S/R]+

delay = 2·RTT + O/R + Σ_{p=1..P} idleTime_p
      = 2·RTT + O/R + Σ_{k=1..P} [S/R + RTT - (2^(k-1))·S/R]
      = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where k = 1, 2, ..., K.

(Figure: the same client/server timeline as before: initiate TCP connection, request object, windows of S/R, 2S/R, 4S/R, 8S/R, complete transmission, object delivered.)

3-107

TCP Delay Modeling (4)

How do we calculate K, the number of windows that cover the object?

K = min{k : 2^0·S + 2^1·S + ... + 2^(k-1)·S >= O}
  = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
  = min{k : 2^k - 1 >= O/S}
  = min{k : k >= log2(O/S + 1)}
  = ⌈log2(O/S + 1)⌉

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
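Rather than using the closed form, the idle times can simply be summed; a Python sketch under the model's assumptions (the parameters at the end mirror the Experiment A style values for R = 1 Mbps, so small differences from the table that follows are possible):

# Latency over one link with TCP slow start: windows of S, 2S, 4S, ... cover the object,
# and the server may idle after each of the first K-1 windows.
import math

def slow_start_latency(O, S, R, RTT):
    K = math.ceil(math.log2(O / S + 1))          # number of windows that cover the object
    latency = 2 * RTT + O / R                    # connection setup + request + transmission
    stalls = 0
    for k in range(1, K):                        # possible idle time after windows 1..K-1
        idle = S / R + RTT - (2 ** (k - 1)) * S / R
        if idle > 0:
            latency += idle
            stalls += 1
    return latency, stalls                       # stalls is P = min{Q, K-1}

print(slow_start_latency(O=100e3 * 8, S=536 * 8, R=1e6, RTT=0.1))   # roughly (1.39, 5)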

3-108

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 Kbytes (relatively large)

R          O/R        P   Min latency (O/R + 2·RTT)   Latency with slow start
28 kbps    28.6 sec   1   28.8 sec                    28.9 sec
100 kbps   8 sec      2   8.2 sec                     8.4 sec
1 Mbps     800 msec   5   1 sec                       1.5 sec
10 Mbps    80 msec    7   0.28 sec                    0.98 sec

Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 Kbytes (relatively small)

R          O/R       P   Min latency (O/R + 2·RTT)   Latency with slow start
28 kbps    1.43 sec  1   1.63 sec                    1.73 sec
100 kbps   0.4 sec   2   0.6 sec                     0.76 sec
1 Mbps     40 msec   3   0.24 sec                    0.52 sec
10 Mbps    4 msec    3   0.20 sec                    0.50 sec

Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 Kbytes (relatively small)

R          O/R       P   Min latency (O/R + 2·RTT)   Latency with slow start
28 kbps    1.43 sec  3   3.4 sec                     5.8 sec
100 kbps   0.4 sec   3   2.4 sec                     5.2 sec
1 Mbps     40 msec   3   2.0 sec                     5.0 sec
10 Mbps    4 msec    3   2.0 sec                     5.0 sec

Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
  1 base HTML page (of size O bits)
  M images (each of size O bits)

Non-persistent HTTP:
  M+1 TCP connections in series
  Response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times

Persistent HTTP:
  2 RTT to request and receive the base HTML file
  1 RTT to request and receive the M images
  Response time = (M+1)·O/R + 3·RTT + sum of idle times

Non-persistent HTTP with X parallel connections (suppose M/X is an integer):
  1 TCP connection for the base file
  M/X sets of parallel connections for the images
  Response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times
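The three response-time formulas, ignoring the idle-time terms, in a small Python sketch (the parameter values are the ones used in the plots that follow):

# Response times without the slow-start idle terms.
def nonpersistent(M, O, R, RTT):
    return (M + 1) * O / R + (M + 1) * 2 * RTT

def persistent(M, O, R, RTT):
    return (M + 1) * O / R + 3 * RTT

def parallel_nonpersistent(M, O, R, RTT, X):
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT

O, M, X, RTT = 5e3 * 8, 10, 5, 0.1               # O = 5 Kbytes, M = 10 images, X = 5
for R in [28e3, 100e3, 1e6, 10e6]:
    print(R, nonpersistent(M, O, R, RTT), persistent(M, O, R, RTT),
          parallel_nonpersistent(M, O, R, RTT, X))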

3-112

(Figure: HTTP response time in seconds at 28 kbps, 100 kbps, 1 Mbps and 10 Mbps for non-persistent, persistent and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.)

For a low-bandwidth connection, response time is dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

(Figure: the same comparison with RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.)

For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay·bandwidth networks.

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next: leaving the network "edge" (application, transport layers), into the network "core".

Page 30: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-30

rdt21 discussion

Sender seq added to pkt two seq rsquos (01)

will suffice Why must check if

received ACKNAK corrupted

twice as many states state must

ldquorememberrdquo whether ldquocurrentrdquo pkt has 0 or 1 seq

Receiver must check if

received packet is duplicate state indicates

whether 0 or 1 is expected pkt seq

note receiver can not know if its last ACKNAK received OK at sender

3-31

rdt22 a NAK-free protocol

same functionality as rdt21 using ACKs only

instead of NAK receiver sends ACK for last pkt received OK receiver must explicitly include seq of pkt being

ACKed

duplicate ACK at sender results in same action as NAK retransmit current pkt

This is important because TCP uses this approach (NO NAC)

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control
end-end control (no network assistance)
sender limits transmission: LastByteSent - LastByteAcked <= CongWin
roughly: rate = CongWin / RTT bytes/sec
CongWin is a dynamic function of perceived network congestion
How does the sender perceive congestion?
loss event = timeout or 3 duplicate ACKs
the TCP sender reduces its rate (CongWin) after a loss event
three mechanisms: AIMD, slow start, conservative behavior after timeout events
Note: CC must also be efficient, to make use of the available BW

3-89

TCP AIMD
multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing
(figure: congestion window of a long-lived TCP connection, a sawtooth oscillating over time between roughly 8, 16, and 24 Kbytes)
Role of ACKs: 1. an indication of loss; 2. ACKs are self-clocking: if delay is large, the rate of congestion-window growth is reduced
Human analogy: HK government or CUHK
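A minimal sketch of the AIMD rule as Python (my own illustration, not the book's code; the window is tracked in MSS units):

# AIMD sketch: additive increase of 1 MSS per RTT, multiplicative decrease on loss.
# Illustrative only; real TCP also has slow start, fast recovery, and timeouts.
def aimd_step(cwnd_mss, loss_event):
    """Return the new congestion window (in MSS) after one RTT."""
    if loss_event:
        return max(1.0, cwnd_mss / 2.0)   # multiplicative decrease: cut in half
    return cwnd_mss + 1.0                 # additive increase: +1 MSS per RTT

cwnd = 10.0
for rtt, loss in enumerate([False, False, False, True, False, False]):
    cwnd = aimd_step(cwnd, loss)
    print(f"RTT {rtt}: cwnd = {cwnd} MSS")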

3-90

TCP Slow Start
When the connection begins, CongWin = 1 MSS
Example: MSS = 500 bytes & RTT = 200 msec: initial rate = 20 kbps
available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate
When the connection begins, increase the rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)
When the connection begins, increase the rate exponentially until the first loss event:
double CongWin every RTT
done by incrementing CongWin by 1 MSS for every ACK received
Summary: the initial rate is slow, but it ramps up exponentially fast
(figure: Host A sends one segment, then two segments, then four segments to Host B, one RTT apart)

3-92

Refinement
After 3 dup ACKs:
CongWin is cut in half; the window then grows linearly
this is known as the "fast-recovery" phase, implemented in TCP Reno
But after a timeout event:
CongWin is instead set to 1 MSS; the window then grows exponentially to a threshold, then grows linearly
Philosophy:
3 dup ACKs indicate the network is capable of delivering some segments
a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.
Implementation:
variable Threshold
at a loss event, Threshold is set to 1/2 of CongWin just before the loss event
(figure: congestion window size (segments) vs. transmission round, showing the threshold and the different behavior of TCP Tahoe and TCP Reno after a loss)

3-94

Summary: TCP Congestion Control
When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.
When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.
When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.
When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.
READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss.

3-95

TCP sender congestion control
Event | State | TCP Sender Action | Commentary
ACK receipt for previously unacked data | Slow Start (SS) | CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance" | Results in a doubling of CongWin every RTT
ACK receipt for previously unacked data | Congestion Avoidance (CA) | CongWin = CongWin + MSS * (MSS / CongWin) | Additive increase: CongWin grows by 1 MSS every RTT
Loss event detected by triple duplicate ACK | SS or CA | Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance" | Fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS
Timeout | SS or CA | Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start" | Enter slow start
Duplicate ACK | SS or CA | Increment duplicate ACK count for the segment being acked | CongWin and Threshold not changed
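A sketch mirroring the event table above (my own, not the authors'); CongWin and Threshold are in bytes and the MSS value is an assumption:

# Sketch of the sender actions in the table above (TCP Reno style).
MSS = 1460                      # assumed segment size in bytes

class RenoSender:
    def __init__(self):
        self.congwin = MSS
        self.threshold = 64 * 1024      # assumed initial threshold
        self.state = "SS"               # "SS" = slow start, "CA" = congestion avoidance
        self.dup_acks = 0

    def on_new_ack(self):
        self.dup_acks = 0
        if self.state == "SS":
            self.congwin += MSS                       # doubles CongWin every RTT
            if self.congwin > self.threshold:
                self.state = "CA"
        else:
            self.congwin += MSS * MSS / self.congwin  # +1 MSS per RTT

    def on_dup_ack(self):
        self.dup_acks += 1
        if self.dup_acks == 3:                        # triple duplicate ACK
            self.threshold = self.congwin / 2
            self.congwin = max(self.threshold, MSS)   # never below 1 MSS
            self.state = "CA"

    def on_timeout(self):
        self.threshold = self.congwin / 2
        self.congwin = MSS
        self.state = "SS"
        self.dup_acks = 0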

3-96

TCP throughput
What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)
Let W be the window size when loss occurs.
When the window is W, throughput is W/RTT.
Just after a loss, the window drops to W/2, and throughput to W/2/RTT.
Average throughput: 0.75 W/RTT.
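A quick numerical check of the 0.75 W/RTT claim: average the linear sawtooth between W/2 and W (the values of W and RTT below are illustrative):

# Average of a sawtooth that rises linearly from W/2 to W, expressed as a rate.
W, RTT = 20.0, 0.1                                   # window in segments, RTT in seconds
samples = [(W / 2 + i * (W / 2) / 1000) / RTT for i in range(1001)]
print(sum(samples) / len(samples))                   # numerically ~0.75 * W / RTT
print(0.75 * W / RTT)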

3-97

TCP Futures
Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires window size W = 83,333 in-flight segments
Throughput in terms of loss rate: Throughput = 1.22 * MSS / (RTT * sqrt(L))
so L = 2e-10. Wow! New versions of TCP for high-speed links are needed!
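Checking those two numbers with the formula above (a small, self-contained Python sketch):

MSS_bits = 1500 * 8
RTT = 0.1                                      # seconds
target = 10e9                                  # 10 Gbps
W = target * RTT / MSS_bits                    # in-flight segments needed
L = (1.22 * MSS_bits / (RTT * target)) ** 2    # tolerable loss rate
print(round(W))                                # ~83333 segments
print(L)                                       # ~2e-10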

3-98

TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
(figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R)

3-99

Why is TCP fair?
Two competing homogeneous sessions (e.g., similar propagation delay, etc.):
additive increase gives a slope of 1 as throughput increases
multiplicative decrease decreases throughput proportionally
(figure: connection 2 throughput vs. connection 1 throughput, each bounded by R; the trajectory of congestion-avoidance additive increase, followed by loss and halving of the window, converges toward the equal-bandwidth-share line)

3-100

Fairness (more)
Fairness and UDP:
multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
research area: TCP-friendly congestion control
Fairness and parallel TCP connections:
nothing prevents an app from opening parallel connections between 2 hosts
Web browsers do this
Example: a link of rate R supporting 9 connections:
a new app asking for 1 TCP connection gets rate R/10
a new app asking for 11 TCP connections gets around R/2

3-101

Delay modeling
Q: How long does it take to receive an object from a Web server after sending a request?
Delay is influenced by: TCP connection establishment, data transmission delay, slow start, and the sender's congestion window.
Notation, assumptions:
assume one link between client and server, of rate R
S: MSS (bits); O: object size (bits)
no retransmissions (no loss, no corruption)
protocol overhead is negligible
Window size:
first assume a fixed congestion window of W segments
then dynamic window modeling: slow start

3-102

Fixed congestion window (1)
Let W denote a 'fixed' congestion window size (a positive integer); assume no congestion-window constraint.
First case: W*S/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data has been sent
delay = 2*RTT + O/R
(the object request is piggybacked; this is the "lower bound" on latency)

3-103

Fixed (or static) congestion window (2)
Second case: W*S/R < RTT + S/R: the sender waits for an ACK after sending each window's worth of data
delay = 2*RTT + O/R + (K-1)*[S/R + RTT - W*S/R]
where K is the number of windows of data that cover the object; if O is the "size" of the object, K = ceil(O / (W*S))
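A small Python sketch of the static-window model above (parameter names are mine; it simply encodes the two cases):

import math

def fixed_window_delay(O, S, R, RTT, W):
    """Latency to fetch an O-bit object with a fixed window of W segments of S bits
    over a link of rate R bps, per the static-window model on this slide."""
    K = math.ceil(O / (W * S))                 # number of windows covering the object
    stall = S / R + RTT - W * S / R            # idle time per window, if positive
    if stall <= 0:                             # case 1: the pipe stays full
        return 2 * RTT + O / R
    return 2 * RTT + O / R + (K - 1) * stall   # case 2: sender stalls K-1 times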

3-104

TCP Delay Modeling: Slow Start (1)
Now suppose the window grows according to slow start.
Will show that the delay for one object is:

Latency = 2*RTT + O/R + P*[RTT + S/R] - (2^P - 1)*S/R

where P is the number of times TCP idles at the server:
P = min{Q, K-1}
- where Q is the number of times the server would idle if the object were of infinite size
- and K is the number of windows that cover the object
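A sketch of this latency model in Python (my own coding of the formula; the closed forms for K and Q are the standard ones derived two slides later). The example parameters are taken from the 1 Mbps row of Experiment B later in the deck and should roughly reproduce its slow-start latency:

import math

def slow_start_latency(O, S, R, RTT):
    """Latency formula on this slide; O and S in bits, R in bps, RTT in seconds."""
    K = math.ceil(math.log2(O / S + 1))          # windows needed to cover the object
    Q = math.ceil(math.log2(RTT * R / S + 1))    # idles if the object were infinite
    P = min(Q, K - 1)                            # times the server actually idles
    return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R

print(slow_start_latency(O=5000 * 8, S=536 * 8, R=1_000_000, RTT=0.1))   # ~0.52 s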

3-105

TCP Delay Modeling: Slow Start (2)
(figure: timeline at client and server: initiate TCP connection, request object, first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R, complete transmission, object delivered)
Example:
O/S = 15 segments
K = 4 windows
Q = 2
P = min{K-1, Q} = 2
Server idles P = 2 times
Delay components:
2*RTT for connection establishment and request
O/R to transmit the object
time the server idles due to slow start
Server idles P = min{K-1, Q} times

3-106

TCP Delay Modeling (3)

delay = 2*RTT + O/R + sum over k = 1..P of idleTime_k
      = 2*RTT + O/R + sum over k = 1..P of [S/R + RTT - 2^(k-1)*S/R]
      = 2*RTT + O/R + P*[RTT + S/R] - (2^P - 1)*S/R

where, for the kth window (k = 1, 2, ..., K):
2^(k-1)*S/R = time to transmit the kth window
S/R + RTT = time from when the server starts to send a segment until the server receives an acknowledgement
[S/R + RTT - 2^(k-1)*S/R]+ = idle time after the kth window
(figure: same client/server timeline as on the previous slide)

3-107

TCP Delay Modeling (4)
Recall K = the number of windows that cover the object. How do we calculate K?

K = min{k : 2^0*S + 2^1*S + ... + 2^(k-1)*S >= O}
  = min{k : (2^k - 1)*S >= O}
  = min{k : 2^k >= O/S + 1}
  = min{k : k >= log2(O/S + 1)}
  = ceil(log2(O/S + 1))

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).

3-108

Experiment A
S = 536 bytes, RTT = 100 msec, O = 100 Kbytes (relatively large)
R        | O/R      | P | Min latency: O/R + 2 RTT | Latency with slow start
28 kbps  | 28.6 sec | 1 | 28.8 sec                 | 28.9 sec
100 kbps | 8 sec    | 2 | 8.2 sec                  | 8.4 sec
1 Mbps   | 800 msec | 5 | 1 sec                    | 1.5 sec
10 Mbps  | 80 msec  | 7 | 0.28 sec                 | 0.98 sec
Observation: slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B
S = 536 bytes, RTT = 100 msec, O = 5 Kbytes (relatively small)
R        | O/R      | P | Min latency: O/R + 2 RTT | Latency with slow start
28 kbps  | 1.43 sec | 1 | 1.63 sec                 | 1.73 sec
100 kbps | 0.4 sec  | 2 | 0.6 sec                  | 0.76 sec
1 Mbps   | 40 msec  | 3 | 0.24 sec                 | 0.52 sec
10 Mbps  | 4 msec   | 3 | 0.20 sec                 | 0.50 sec
Observation: slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C
S = 536 bytes, RTT = 1 sec, O = 5 Kbytes (relatively small)
R        | O/R      | P | Min latency: O/R + 2 RTT | Latency with slow start
28 kbps  | 1.43 sec | 3 | 3.4 sec                  | 5.8 sec
100 kbps | 0.4 sec  | 3 | 2.4 sec                  | 5.2 sec
1 Mbps   | 40 msec  | 3 | 2.0 sec                  | 5.0 sec
10 Mbps  | 4 msec   | 3 | 2.0 sec                  | 5.0 sec
Observation: slow start can significantly increase latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling
Assume a Web page consists of:
1 base HTML page (of size O bits)
M images (each of size O bits)
Non-persistent HTTP:
M+1 TCP connections in series
Response time = (M+1)*O/R + (M+1)*2*RTT + sum of idle times
Persistent HTTP:
2 RTT to request and receive the base HTML file
1 RTT to request and receive the M images
Response time = (M+1)*O/R + 3*RTT + sum of idle times
Non-persistent HTTP with X parallel connections:
suppose M/X is an integer
1 TCP connection for the base file
M/X sets of parallel connections for the images
Response time = (M+1)*O/R + (M/X + 1)*2*RTT + sum of idle times
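A rough calculator for the three formulas above, ignoring the slow-start idle times (a sketch; parameter names are mine):

def http_response_times(O, R, RTT, M, X):
    nonpersistent = (M + 1) * O / R + (M + 1) * 2 * RTT
    persistent    = (M + 1) * O / R + 3 * RTT
    parallel      = (M + 1) * O / R + (M / X + 1) * 2 * RTT
    return nonpersistent, persistent, parallel

# Example: O = 5 Kbytes, M = 10 images, X = 5 parallel connections, RTT = 100 ms, R = 1 Mbps
print(http_response_times(O=5 * 8000, R=1_000_000, RTT=0.1, M=10, X=5))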

3-112

(figure: HTTP response time in seconds at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5)
For a low-bandwidth connection, response time is dominated by transmission time.
Persistent connections give only a minor improvement over parallel connections.

3-113

(figure: HTTP response time in seconds at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5)
For larger RTTs, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay*bandwidth networks.

3-114

Chapter 3 Summary
principles behind transport-layer services:
multiplexing, demultiplexing
reliable data transfer
flow control
congestion control
instantiation and implementation in the Internet: UDP, TCP
Next: leaving the network "edge" (application, transport layers), heading into the network "core"

3-31

rdt2.2: a NAK-free protocol
same functionality as rdt2.1, using ACKs only
instead of a NAK, the receiver sends an ACK for the last pkt received OK
the receiver must explicitly include the seq # of the pkt being ACKed
a duplicate ACK at the sender results in the same action as a NAK: retransmit the current pkt
This is important because TCP uses this approach (no NAKs)

3-32

rdt2.2: sender, receiver fragments
sender FSM fragment:
Wait for call 0 from above
rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt)
Wait for ACK 0
rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,1)): udt_send(sndpkt)
rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0): (move on)
receiver FSM fragment:
Wait for 0 from below
rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq1(rcvpkt):
  extract(rcvpkt, data); deliver_data(data); sndpkt = make_pkt(ACK, 1, chksum); udt_send(sndpkt)
rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || has_seq1(rcvpkt)): udt_send(sndpkt)

3-33

rdt3.0: channels with errors and loss
New assumption: the underlying channel can also lose packets (data or ACKs)
checksums, seq #'s, ACKs, and retransmissions will be of help, but not enough
Q: how to deal with loss?
sender waits until it is certain that data or an ACK was lost, then retransmits
yuck: drawbacks?
Approach: sender waits a "reasonable" amount of time for the ACK
retransmits if no ACK is received in this time
if the pkt (or ACK) was just delayed (not lost): the retransmission will be a duplicate, but use of seq #'s already handles this; the receiver must specify the seq # of the pkt being ACKed
requires a countdown timer
What is the "right value" for the timer? It depends on the flow and network conditions.

3-34

rdt3.0 sender
Wait for call 0 from above:
  rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt): (ignore)
Wait for ACK0:
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,1)): (do nothing)
  timeout: udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0): stop_timer
Wait for call 1 from above:
  rdt_send(data): sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt): (ignore)
Wait for ACK1:
  rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,0)): (do nothing)
  timeout: udt_send(sndpkt); start_timer
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,1): stop_timer

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt3.0
rdt3.0 works, but performance stinks
example: 1 Gbps link, 15 ms end-end propagation delay, 1 KB packet:
T_transmit = L/R = 8000 bits / 10^9 bits/sec = 8 microsec
U_sender: utilization, the fraction of time the sender is busy sending:
U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027
a 1 KB pkt every 30 msec -> 33 kB/sec throughput over a 1 Gbps link
the network protocol limits the use of the physical resources!
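The same utilization computed as a small, self-contained Python check (values from the example above):

L, R, RTT = 8000, 1e9, 0.030        # packet bits, link rate (bps), round-trip time (s)
U = (L / R) / (RTT + L / R)
print(L / R)                        # 8e-06 s = 8 microseconds to transmit
print(U)                            # ~0.00027: sender busy 0.008 ms out of every 30.008 ms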

3-38

rdt3.0: stop-and-wait operation
(timeline: first packet bit transmitted at t = 0; last packet bit transmitted at t = L/R; first and last packet bits arrive at the receiver, which sends an ACK; the ACK arrives and the next packet is sent at t = RTT + L/R)
U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

3-39

Pipelined protocols
Pipelining: the sender allows multiple "in-flight", yet-to-be-acknowledged pkts
the range of sequence numbers must be increased
buffering at sender and/or receiver
Two generic forms of pipelined protocols: Go-Back-N, selective repeat

3-40

Pipelining: increased utilization
(timeline: first packet bit transmitted at t = 0; last bit transmitted at t = L/R; packet bits arrive and ACKs are sent for the 1st, 2nd, and 3rd packets; the first ACK arrives and the next packet is sent at t = RTT + L/R)
U_sender = 3*(L/R) / (RTT + L/R) = 0.024 / 30.008 = 0.0008
Pipelining three packets increases utilization by a factor of 3.

3-41

Go-Back-N (sliding window protocol)
Sender:
k-bit seq # in the pkt header
a "window" of up to N consecutive unACKed pkts allowed
ACK(n): ACKs all pkts up to and including seq # n: a "cumulative ACK"
the sender may receive duplicate ACKs (see receiver)
timer for in-flight pkts
timeout(n): retransmit pkt n and all higher-seq-# pkts in the window
(for now, treat the seq # as unlimited)
Q: what happens when a receiver is totally disconnected? MAX RETRY
DON'T FALL ASLEEP!

3-42

GBN sender extended FSM
initialization: base = 1; nextseqnum = 1
rdt_send(data):
  if (nextseqnum < base + N) {
    sndpkt[nextseqnum] = make_pkt(nextseqnum, data, chksum)
    udt_send(sndpkt[nextseqnum])
    if (base == nextseqnum) start_timer
    nextseqnum++
  } else refuse_data(data)                /* buffer the data or block the higher app */
timeout:
  start_timer
  udt_send(sndpkt[base]); udt_send(sndpkt[base+1]); ... ; udt_send(sndpkt[nextseqnum-1])
rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
  base = getacknum(rcvpkt) + 1
  if (base == nextseqnum) stop_timer      /* no pkt in the pipe */
  else start_timer                        /* reset the timer */
rdt_rcv(rcvpkt) && corrupt(rcvpkt): (do nothing)

3-43

GBN receiver extended FSM
ACK-only: always send an ACK for a correctly-received pkt with the highest in-order seq #
may generate duplicate ACKs
need only remember expectedseqnum
out-of-order pkt: discard (don't buffer) -> no receiver buffering; re-ACK the pkt with the highest in-order seq #
initialization: expectedseqnum = 1; sndpkt = make_pkt(expectedseqnum, ACK, chksum)
default: udt_send(sndpkt)
rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum):
  extract(rcvpkt, data); deliver_data(data)
  sndpkt = make_pkt(expectedseqnum, ACK, chksum); udt_send(sndpkt)
  expectedseqnum++
If an in-order pkt is received: deliver it to the app and ACK. Else: just drop it.
We don't even need a timer at the receiver: simply rely on retransmission by the sender.
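A compact Go-Back-N sender sketch in Python (my own illustration of the FSM above; udt_send, make_pkt, and the timer callbacks are placeholders for the channel primitives):

class GBNSender:
    def __init__(self, N, udt_send, make_pkt, start_timer, stop_timer):
        self.N = N                          # window size
        self.base = 1
        self.nextseqnum = 1
        self.sndpkt = {}
        self.udt_send, self.make_pkt = udt_send, make_pkt
        self.start_timer, self.stop_timer = start_timer, stop_timer

    def rdt_send(self, data):
        if self.nextseqnum < self.base + self.N:
            pkt = self.make_pkt(self.nextseqnum, data)
            self.sndpkt[self.nextseqnum] = pkt
            self.udt_send(pkt)
            if self.base == self.nextseqnum:
                self.start_timer()          # timer covers the oldest unACKed pkt
            self.nextseqnum += 1
            return True
        return False                        # window full: refuse the data

    def on_ack(self, acknum):
        self.base = acknum + 1              # cumulative ACK slides the window
        if self.base == self.nextseqnum:
            self.stop_timer()               # nothing left in flight
        else:
            self.start_timer()              # restart for the new oldest pkt

    def on_timeout(self):
        self.start_timer()
        for seq in range(self.base, self.nextseqnum):
            self.udt_send(self.sndpkt[seq]) # go back N: resend every unACKed pkt

s = GBNSender(N=4, udt_send=print, make_pkt=lambda n, d: (n, d),
              start_timer=lambda: None, stop_timer=lambda: None)
s.rdt_send("hello")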

3-44

GBN in action (window size = N = 4)
What determines the size of the window?
1. RTT
2. the buffer at the receiver (flow control)
3. network congestion
Q: GBN can have poor performance. How?
The sender sends pkts 1,2,3,4,5,6,7,8,9; pkt 1 gets lost; the receiver gets pkts 2,3,4,5,... but will discard them all.

3-45

Selective Repeat (an improvement of the GBN protocol)
the receiver individually acknowledges all correctly received pkts
buffers pkts, as needed, for eventual in-order delivery to the upper layer
e.g. the sender sends pkts 1,2,3,4,...,10; the receiver got 2,4,6,8,10; the sender resends only 1,3,5,7,9
the sender only resends pkts for which an ACK was not received
the sender keeps a timer for EACH unACKed pkt
sender window: N consecutive seq #'s, again limits the seq #'s of sent, unACKed pkts

3-46

Selective repeat: sender, receiver windows
(figure: sender and receiver window diagrams)
Q: why do we have this region of ACKed-but-not-cumulative pkts? An ACK was lost, or an ACK is still on its way.

3-47

Selective repeat
sender:
data from above: if the next available seq # is in the window, send the pkt
timeout(n): resend pkt n only, restart its timer
ACK(n) in [sendbase, sendbase+N]:
mark pkt n as received
if n is the smallest unACKed pkt, advance the window base to the next unACKed seq # (slide the window)
receiver:
pkt n in [rcvbase, rcvbase+N-1]:
send ACK(n)
out-of-order: buffer
in-order: deliver (also deliver buffered, in-order pkts), advance the window to the next not-yet-received pkt
pkt n in [rcvbase-N, rcvbase-1]: send ACK(n)
otherwise: ignore
Q: why ACK a pkt below the window? The ACK got lost; the sender may time out and resend the pkt, and we need to ACK it.
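A minimal Selective Repeat receiver sketch (my own illustration of the rules above, not the book's code; deliver and send_ack are placeholders):

class SRReceiver:
    def __init__(self, N, deliver, send_ack):
        self.N = N
        self.rcv_base = 1
        self.buffer = {}                   # out-of-order packets awaiting delivery
        self.deliver, self.send_ack = deliver, send_ack

    def on_packet(self, seq, data):
        if self.rcv_base - self.N <= seq < self.rcv_base:
            self.send_ack(seq)             # already delivered: re-ACK so the sender can advance
        elif self.rcv_base <= seq < self.rcv_base + self.N:
            self.send_ack(seq)
            self.buffer[seq] = data
            while self.rcv_base in self.buffer:          # deliver the in-order run
                self.deliver(self.buffer.pop(self.rcv_base))
                self.rcv_base += 1
        # otherwise: ignore

r = SRReceiver(N=4, deliver=print, send_ack=lambda n: None)
r.on_packet(2, "b"); r.on_packet(1, "a")   # delivers "a" then "b"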

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma
Example: seq #'s 0, 1, 2, 3; window size N = 3
the receiver sees no difference in the two scenarios!
it incorrectly passes duplicate data as new in (a)
Q: what is the relationship between seq # space size and window size?
N <= 2^k / 2 (the window must be at most half the sequence-number space)
practical issue: in real life we use k bits to implement the seq #

3-50

Why bother studying reliable data transfer?
We know it is provided by TCP, so why bother to study it?
Sometimes we may need to implement "some form" of reliable transfer without the heavy-duty TCP.
A good example is multimedia streaming: even though the application is loss tolerant, if too many packets get lost the visual quality suffers, so we may want to implement some form of reliable transfer.
At the very least, appreciate the "good services" provided by some Internet gurus.

3-51

Chapter 3 outline
3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP: segment structure, reliable data transfer, flow control, connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-52

TCP Overview: RFCs 793, 1122, 1323, 2018, 2581
point-to-point: one sender, one receiver (not multicast)
reliable, in-order byte stream: no "message boundaries" (in the app layer we need delimiters)
pipelined: TCP congestion and flow control set the window size
send & receive buffers: application writes data into the socket door and the TCP send buffer; segments travel to the TCP receive buffer, from which the application reads data
full duplex data: bi-directional data flow in the same connection; MSS: maximum segment size
connection-oriented: handshaking (exchange of control msgs) inits sender and receiver state (e.g. buffer size) before data exchange
flow controlled: the sender will not overwhelm the receiver
(The 800-lb gorilla in the transport stack. PAY ATTENTION!)

3-53

TCP segment structure (32 bits wide)
source port # | dest port #
sequence number (counting by "bytes" of data, not segments)
acknowledgement number
head len | not used | flags URG ACK PSH RST SYN FIN | receive window (# bytes rcvr willing to accept)
checksum (Internet checksum, as in UDP) | urgent data pointer
options (variable length): due to this field we have a variable-length header
application data (variable length)
URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection establishment and teardown commands
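Packing the fixed 20-byte header laid out above, as a sketch using Python's struct module (the field values in the example are arbitrary, not from the slides):

import struct

def tcp_header(src_port, dst_port, seq, ack, flags, rcv_window,
               checksum=0, urg_ptr=0, data_offset_words=5):
    offset_byte = data_offset_words << 4        # header length in 32-bit words, no options
    return struct.pack("!HHIIBBHHH",
                       src_port, dst_port, seq, ack,
                       offset_byte, flags, rcv_window, checksum, urg_ptr)

SYN, ACK = 0x02, 0x10                           # flag bit positions in the flags byte
hdr = tcp_header(12345, 80, seq=0, ack=0, flags=SYN, rcv_window=65535)
print(len(hdr))                                 # 20 bytes without options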

3-54

TCP seq. #'s and ACKs
Seq. #'s: the byte-stream "number" of the first byte in the segment's data
ACKs: the seq # of the next byte expected from the other side; cumulative ACK
Q: how does the receiver handle out-of-order segments? A: the TCP spec doesn't say; it is up to the implementor
simple telnet scenario:
Host A -> Host B: Seq=42, ACK=79, data = 'C' (user types 'C')
Host B -> Host A: Seq=79, ACK=43, data = 'C' (host ACKs receipt of 'C', echoes back 'C')
Host A -> Host B: Seq=43, ACK=80 (host ACKs receipt of the echoed 'C')
(the initial seq #'s are negotiated during the 3-way handshake)

3-55

TCP Round Trip Time and Timeout
Q: how to set the TCP timeout value?
longer than RTT, but RTT varies
too short: premature timeout, unnecessary retransmissions
too long: slow reaction to segment loss, poor performance
Q: how to estimate RTT?
SampleRTT: measured time from segment transmission until ACK receipt; ignore retransmissions
SampleRTT will vary; we want an estimated RTT that is "smoother": average several recent measurements, not just the current SampleRTT
(figure: if the estimated RTT is too short, the retransmission fires before the ACK arrives; if too long, the sender reacts slowly to loss)

3-56

TCP Round Trip Time and Timeout
EstimatedRTT = (1 - α)*EstimatedRTT + α*SampleRTT
exponential weighted moving average: the influence of a past sample decreases exponentially fast
typical value: α = 0.125
Expanding the recursion (with ERTT(0) = 0):
ERTT(1) = (1 - α)*ERTT(0) + α*SRTT(1) = α*SRTT(1)
ERTT(2) = (1 - α)*α*SRTT(1) + α*SRTT(2)
ERTT(3) = (1 - α)*(1 - α)*α*SRTT(1) + (1 - α)*α*SRTT(2) + α*SRTT(3)

3-57

Example RTT estimation
(figure: SampleRTT and EstimatedRTT, roughly 100-350 milliseconds, for gaia.cs.umass.edu to fantasia.eurecom.fr, plotted over about 106 seconds; EstimatedRTT is visibly smoother than SampleRTT)

3-58

TCP Round Trip Time and Timeout
Setting the timeout (Jacobson/Karels): EstimatedRTT plus a "safety margin"
a large variation in EstimatedRTT -> a larger safety margin
first estimate how much SampleRTT deviates from EstimatedRTT:
DevRTT = (1 - β)*DevRTT + β*|SampleRTT - EstimatedRTT|   (typically β = 0.25)
then set the timeout interval:
TimeoutInterval = EstimatedRTT + 4*DevRTT
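A small Python sketch of the two estimators above (the initialization of DevRTT to half the first sample is my assumption, not from the slides):

class RTTEstimator:
    def __init__(self, first_sample):
        self.estimated_rtt = first_sample
        self.dev_rtt = first_sample / 2        # assumed initial deviation
        self.alpha, self.beta = 0.125, 0.25

    def update(self, sample_rtt):
        self.estimated_rtt = (1 - self.alpha) * self.estimated_rtt + self.alpha * sample_rtt
        self.dev_rtt = (1 - self.beta) * self.dev_rtt + self.beta * abs(sample_rtt - self.estimated_rtt)
        return self.estimated_rtt + 4 * self.dev_rtt       # TimeoutInterval

est = RTTEstimator(0.100)
for s in (0.120, 0.095, 0.300, 0.110):          # made-up SampleRTT values in seconds
    print(est.update(s))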

3-59

Chapter 3 outline
3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP: segment structure, reliable data transfer, flow control, connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-60

TCP reliable data transfer
TCP creates an rdt service on top of IP's unreliable service
pipelined segments (for performance)
cumulative ACKs
TCP uses a single retransmission timer
retransmissions are triggered by: timeout events, duplicate ACKs (for performance reasons)
initially consider a simplified TCP sender: ignore duplicate ACKs, ignore flow control and congestion control

3-61

TCP sender events
data rcvd from app:
create a segment with a seq #
the seq # is the byte-stream number of the first data byte in the segment
start the timer if it is not already running (think of the timer as being for the oldest unACKed segment); expiration interval: TimeOutInterval
timeout:
retransmit the segment that caused the timeout
restart the timer
ACK rcvd:
if it acknowledges previously unACKed segments: update what is known to be ACKed; start the timer if there are still outstanding segments

3-62

TCP sender (simplified)
NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum
loop (forever) {
  switch(event)
  event: data received from application above
    create TCP segment with sequence number NextSeqNum
    if (timer currently not running) start timer
    pass segment to IP
    NextSeqNum = NextSeqNum + length(data)
  event: timer timeout
    retransmit not-yet-acknowledged segment with smallest sequence number
    start timer
  event: ACK received, with ACK field value of y
    if (y > SendBase) {
      SendBase = y
      if (there are currently not-yet-acknowledged segments) start timer
    }
} /* end of loop forever */
Comment: SendBase - 1 is the last cumulatively ACKed byte.
Example: SendBase - 1 = 71 and y = 73, so the rcvr wants 73+; y > SendBase, so new data is ACKed.

3-63

TCP retransmission scenarios
lost ACK scenario:
Host A sends Seq=92, 8 bytes data; Host B's ACK=100 is lost (X); A times out and resends Seq=92, 8 bytes data; B replies ACK=100; SendBase becomes 100
premature timeout scenario:
Host A sends Seq=92, 8 bytes data and Seq=100, 20 bytes data; the timer for Seq=92 expires before ACK=100 arrives, so A resends Seq=92, 8 bytes data; B replies with ACK=120; SendBase moves from 100 to 120

3-64

TCP retransmission scenarios (more)
cumulative ACK scenario:
Host A sends Seq=92, 8 bytes data and Seq=100, 20 bytes data; ACK=100 is lost (X), but ACK=120 arrives before the timeout; SendBase becomes 120 and no retransmission is needed
(room for improvement over retransmitting on every lost ACK)

3-65

TCP ACK generation [RFC 1122, RFC 2581]
Event at receiver -> TCP receiver action:
arrival of in-order segment with expected seq #, all data up to the expected seq # already ACKed -> delayed ACK: wait up to 500 ms for the next segment; if no next segment, send the ACK
arrival of in-order segment with expected seq #, one other segment has an ACK pending -> immediately send a single cumulative ACK, ACKing both in-order segments (this reduces feedback traffic by half)
arrival of out-of-order segment with higher-than-expected seq #, gap detected -> immediately send a duplicate ACK, indicating the seq # of the next expected byte
arrival of a segment that partially or completely fills a gap -> immediately send an ACK, provided that the segment starts at the lower end of the gap
(always ACK the "largest in-order byte" seq #)

3-66

Fast Retransmit
The timeout period is often relatively long: a long delay before resending a lost packet.
Detect lost segments via duplicate ACKs:
the sender often sends many segments back-to-back
if a segment is lost, there will likely be many duplicate ACKs
If the sender receives 3 duplicate ACKs for the same data, it supposes that the segment after the ACKed data was lost:
fast retransmit: resend the segment before the timer expires

3-67

Fast retransmit algorithm
event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments) start timer
  }
  else {                                /* a duplicate ACK for an already-ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3) {
      resend segment with sequence number y       /* fast retransmit */
    }
  }
Q: why resend the pkt with seq # y? A: that is what the receiver expects.

3-68

Chapter 3 outline
3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP: segment structure, reliable data transfer, flow control, connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-69

TCP Flow Control
the receive side of a TCP connection has a receive buffer
the app process may be slow at reading from the buffer
flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast
speed-matching service: matching the send rate to the receiving app's drain rate

3-70

TCP Flow control: how it works
(Suppose the TCP receiver discards out-of-order segments.)
spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
the rcvr advertises the spare room by including the value of RcvWindow in segments
the sender limits unACKed data to RcvWindow: this guarantees the receive buffer doesn't overflow
(this goes to show that the design of the header is important)
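The spare-room computation as a tiny Python check (the byte counts are illustrative, not from the slides):

RcvBuffer = 64 * 1024
LastByteRcvd, LastByteRead = 150_000, 120_000
RcvWindow = RcvBuffer - (LastByteRcvd - LastByteRead)
print(RcvWindow)        # bytes the sender may keep unACKed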

3-71

Chapter 3 outline
3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP: segment structure, reliable data transfer, flow control, connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-72

TCP Connection Management
Recall: TCP sender and receiver establish a "connection" before exchanging data segments
initialize TCP variables: seq #s, buffers, flow control info (e.g. RcvWindow)
client: connection initiator: Socket clientSocket = new Socket("hostname", port number);
server: contacted by client: Socket connectionSocket = welcomeSocket.accept();
Three-way handshake:
Step 1: the client host sends a TCP SYN segment to the server: specifies the initial seq #, no data
Step 2: the server host receives the SYN, replies with a SYN-ACK segment: the server allocates buffers and specifies the server's initial seq #
Step 3: the client receives the SYN-ACK, replies with an ACK segment, which may contain data
IMPORTANT: SYN FLOODING

3-73

TCP three-way handshake
connection request: SYN=1, seq=client_isn
connection granted: SYN=1, seq=server_isn, ack=client_isn+1
ACK: SYN=0, seq=client_isn+1, ack=server_isn+1

3-74

TCP Connection Management (cont.)
Closing a connection: the client closes its socket: clientSocket.close();
Step 1: the client end system sends a TCP FIN control segment to the server
Step 2: the server receives the FIN, replies with an ACK; it closes the connection and sends its own FIN
(timeline: client sends FIN and closes; server ACKs, then sends FIN and closes; client ACKs, waits in "timed wait", then is closed)
Q: why don't we combine the ACK and the FIN? The sender may still have some data in the pipeline.

3-75

TCP Connection Management (cont.)
Step 3: the client receives the FIN, replies with an ACK
it enters a "timed wait" state, in which it will respond with an ACK to any received FINs
Step 4: the server receives the ACK; the connection is closed
Note: with a small modification, this can handle simultaneous FINs.
(timeline: client closing -> timed wait -> closed; server closing -> closed)

3-76

TCP Connection Management (cont.)
(figures: the TCP client lifecycle and TCP server lifecycle state diagrams)

3-77

Chapter 3 outline
3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP: segment structure, reliable data transfer, flow control, connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-78

Principles of Congestion Control
Congestion, informally: "too many sources sending too much data too fast for the network to handle"
different from flow control!
manifestations:
lost packets (buffer overflow at routers)
long delays (queueing in router buffers)
another top-10 problem!
TCP provides only one of the MANY possible ways to perform congestion control

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 32: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-32

rdt22 sender receiver fragments

Wait for call 0 from

above

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)

rdt_send(data)

udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) || isACK(rcvpkt1) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

Wait for ACK

0

sender FSMfragment

Wait for 0 from below

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp has_seq1(rcvpkt)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(ACK1 chksum)udt_send(sndpkt)

rdt_rcv(rcvpkt) ampamp (corrupt(rcvpkt) || has_seq1(rcvpkt))

udt_send(sndpkt)

receiver FSMfragment

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

sender:
data from above: if next available seq # is in window, send pkt
timeout(n): resend pkt n, restart timer
ACK(n) in [sendbase, sendbase+N]:
  mark pkt n as received
  if n is the smallest unACKed pkt, advance window base to next unACKed seq # (slide the window)

receiver:
pkt n in [rcvbase, rcvbase+N-1]:
  send ACK(n)
  out-of-order: buffer
  in-order: deliver (also deliver buffered, in-order pkts), advance window to next not-yet-received pkt
pkt n in [rcvbase-N, rcvbase-1]:
  ACK(n)
otherwise: ignore

Q: why do we need this (ACKing packets below the window)?
The ACK got lost. The sender may time out and resend the pkt; we need to ACK it.
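A corresponding sketch of the selective-repeat receiver's buffering rule in Python (again my own simplification, with unlimited seq #s so the window tests become plain range checks):

class SRReceiver:
    def __init__(self, N):
        self.N = N
        self.rcvbase = 1
        self.buffer = {}             # out-of-order packets awaiting delivery

    def rdt_rcv(self, seq, data):
        if self.rcvbase <= seq < self.rcvbase + self.N:
            self.buffer[seq] = data
            delivered = []
            while self.rcvbase in self.buffer:   # deliver the in-order run
                delivered.append(self.buffer.pop(self.rcvbase))
                self.rcvbase += 1                # slide the window
            return ("ACK", seq, delivered)
        if self.rcvbase - self.N <= seq < self.rcvbase:
            return ("ACK", seq, [])  # already delivered: re-ACK so the sender can advance
        return None                  # otherwise ignore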

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example: seq #'s 0, 1, 2, 3; window size (N) = 3

receiver sees no difference in the two scenarios!
incorrectly passes duplicate data as new in (a)

Q: what is the relationship between seq # space size and window size?
N <= 2^k / 2, i.e. the window can be at most half the sequence number space

In real life we use k bits to implement seq #s. Practical issue!

3-50

Why bother study reliable data transfer

We know it is provided by TCP, so why bother to study it?

Sometimes we may need to implement "some form" of reliable transfer without the heavy-duty TCP.

A good example is multimedia streaming: even though the application is loss tolerant, if too many packets get lost it affects the visual quality. So we may want to implement some form of reliable transfer.

At the very least, appreciate the "good services" provided by some Internet gurus.

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

point-to-point: one sender, one receiver (not multicast)

reliable, in-order byte stream: no "message boundaries" (in the app layer we need delimiters)

pipelined: TCP congestion and flow control set window size

send & receive buffers (figure: application writes data -> socket door -> TCP send buffer -> segment -> TCP receive buffer -> socket door -> application reads data)

full duplex data: bi-directional data flow in same connection; MSS: maximum segment size

connection-oriented: handshaking (exchange of control msgs) inits sender, receiver state (e.g. buffer size) before data exchange

flow controlled: sender will not overwhelm receiver

(The 800-lb gorilla in the transport stack. PAY ATTENTION!)

3-53

TCP segment structure

32 bits wide:
source port # | dest port #
sequence number
acknowledgement number
head len | not used | U A P R S F flags | receive window
checksum | urgent data pointer
options (variable length)
application data (variable length)

URG: urgent data (generally not used)
ACK: ACK # valid
PSH: push data now (generally not used)
RST, SYN, FIN: connection estab (setup, teardown commands)
receive window: # bytes rcvr willing to accept
counting by "bytes" of data (not segments!)
Internet checksum (as in UDP)

Due to the options field we have a variable-length header.
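As an illustration only (not TCP's actual sending path), the 20-byte fixed header can be packed with Python's struct module; the field values below are arbitrary and options are omitted, so head len = 5 words:

import struct

def tcp_header(src_port, dst_port, seq, ack, flags, rcv_window,
               checksum=0, urg_ptr=0):
    offset_reserved = (5 << 4)           # 5 x 32-bit words, no options
    return struct.pack("!HHIIBBHHH",
                       src_port, dst_port, seq, ack,
                       offset_reserved, flags, rcv_window,
                       checksum, urg_ptr)

FLAG_SYN, FLAG_ACK = 0x02, 0x10
hdr = tcp_header(12345, 80, seq=0, ack=0, flags=FLAG_SYN, rcv_window=65535)
print(len(hdr))                          # 20 bytes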

3-54

TCP seq. #'s and ACKs

Seq. #'s: byte-stream "number" of first byte in segment's data

ACKs: seq # of next byte expected from other side; cumulative ACK

Q: how does the receiver handle out-of-order segments?
A: TCP spec doesn't say - up to the implementor

Simple telnet scenario (time runs downward):
Host A -> Host B: Seq=42, ACK=79, data = 'C'    (user types 'C')
Host B -> Host A: Seq=79, ACK=43, data = 'C'    (host ACKs receipt of 'C', echoes back 'C')
Host A -> Host B: Seq=43, ACK=80                (host ACKs receipt of echoed 'C')

Initial seq #s are negotiated during the 3-way handshake.

3-55

TCP Round Trip Time and Timeout

Q: how to set the TCP timeout value?
longer than RTT, but RTT varies
too short: premature timeout, unnecessary retransmissions
too long: slow reaction to segment loss, poor performance

Q: how to estimate RTT?
SampleRTT: measured time from segment transmission until ACK receipt; ignore retransmissions
SampleRTT will vary; want estimated RTT "smoother": average several recent measurements, not just the current SampleRTT

(Figures: tx / retx / ack timelines illustrating an EstimatedRTT that is too long and one that is too short.)

3-56

TCP Round Trip Time and Timeout

EstimatedRTT = (1 - α) * EstimatedRTT + α * SampleRTT

Exponential weighted moving average
influence of past sample decreases exponentially fast
typical value: α = 0.125

ERTT(0) = 0
ERTT(1) = (1-α)*ERTT(0) + α*SRTT(1) = α*SRTT(1)
ERTT(2) = (1-α)*α*SRTT(1) + α*SRTT(2)
ERTT(3) = (1-α)*(1-α)*α*SRTT(1) + (1-α)*α*SRTT(2) + α*SRTT(3)

3-57

Example RTT estimation

(Figure: SampleRTT and EstimatedRTT in milliseconds, roughly 100-350 ms, plotted against time in seconds, for the RTT from gaia.cs.umass.edu to fantasia.eurecom.fr.)

3-58

TCP Round Trip Time and Timeout

Setting the timeout (by Jacobson/Karels): EstimatedRTT plus a "safety margin"
large variation in EstimatedRTT -> larger safety margin

First estimate how much SampleRTT deviates from EstimatedRTT:
DevRTT = (1 - β) * DevRTT + β * |SampleRTT - EstimatedRTT|
(typically β = 0.25)

Then set the timeout interval:
TimeoutInterval = EstimatedRTT + 4 * DevRTT
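A small Python sketch of the EstimatedRTT / DevRTT / TimeoutInterval updates with the typical α = 0.125 and β = 0.25 (the seed values and the update order are my assumptions; the slides do not pin them down):

def update_rtt(sample_rtt, est_rtt, dev_rtt, alpha=0.125, beta=0.25):
    # one reasonable ordering: update EstimatedRTT first, then DevRTT
    est_rtt = (1 - alpha) * est_rtt + alpha * sample_rtt
    dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample_rtt - est_rtt)
    timeout = est_rtt + 4 * dev_rtt
    return est_rtt, dev_rtt, timeout

est, dev = 0.100, 0.0          # seconds; the first SampleRTT would normally seed these
for sample in (0.106, 0.120, 0.140, 0.090):
    est, dev, timeout = update_rtt(sample, est, dev)
    print(round(est, 4), round(dev, 4), round(timeout, 4))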

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IP's unreliable service

Pipelined segments (for performance)
Cumulative ACKs
TCP uses a single retransmission timer

Retransmissions are triggered by:
timeout events
duplicate ACKs (for performance reasons)

Initially consider a simplified TCP sender:
ignore duplicate ACKs
ignore flow control, congestion control

3-61

TCP sender events

data rcvd from app:
create segment with seq #
seq # is byte-stream number of first data byte in segment
start timer if not already running (think of timer as for oldest unACKed segment)
expiration interval: TimeOutInterval

timeout:
retransmit segment that caused timeout
restart timer

ACK rcvd:
if it acknowledges previously unACKed segments
update what is known to be ACKed
start timer if there are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event)

  event: data received from application above
    create TCP segment with sequence number NextSeqNum
    if (timer currently not running)
      start timer
    pass segment to IP
    NextSeqNum = NextSeqNum + length(data)

  event: timer timeout
    retransmit not-yet-acknowledged segment with smallest sequence number
    start timer

  event: ACK received, with ACK field value of y
    if (y > SendBase) {
      SendBase = y
      if (there are currently not-yet-acknowledged segments)
        start timer
    }

} /* end of loop forever */

Comment:
SendBase-1: last cumulatively ACKed byte
Example: SendBase-1 = 71, y = 73, so the rcvr wants 73+; y > SendBase, so new data is ACKed

3-63

TCP retransmission scenarios

lost ACK scenario:
Host A sends Seq=92, 8 bytes data; Host B's ACK=100 is lost (X);
A times out and resends Seq=92, 8 bytes data; B ACKs 100 again (SendBase = 100).

premature timeout scenario:
Host A sends Seq=92, 8 bytes data, then Seq=100, 20 bytes data;
A's timer for Seq=92 expires before ACK=100 arrives, so A resends Seq=92, 8 bytes data;
ACK=100 arrives (SendBase = 100), then the cumulative ACK=120 arrives (SendBase = 120);
B answers the retransmitted segment with another ACK=120 (SendBase stays 120).

3-64

TCP retransmission scenarios (more)

Cumulative ACK scenario:
Host A sends Seq=92, 8 bytes data, then Seq=100, 20 bytes data;
B's ACK=100 is lost (X), but the cumulative ACK=120 arrives before A's timeout,
so SendBase = 120 and nothing is retransmitted.

Room for improvement?

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at receiver -> TCP receiver action

Arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed
-> Delayed ACK: wait up to 500 ms for next segment; if no next segment, send ACK

Arrival of in-order segment with expected seq #; one other segment has ACK pending
-> Immediately send single cumulative ACK, ACKing both in-order segments

Arrival of out-of-order segment with higher-than-expected seq #; gap detected
-> Immediately send duplicate ACK, indicating seq # of next expected byte

Arrival of segment that partially or completely fills gap
-> Immediately send ACK, provided that segment starts at lower end of gap

ACK the "largest in-order byte" seq #

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long: long delay before resending lost packet

Detect lost segments via duplicate ACKs
sender often sends many segments back-to-back
if a segment is lost, there will likely be many duplicate ACKs

If sender receives 3 duplicate ACKs for the same data, it supposes that the segment after the ACKed data was lost:
fast retransmit: resend the segment before the timer expires

3-67

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments)
      start timer
  }
  else {                                   /* a duplicate ACK for already ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y    /* fast retransmit */
  }

Fast retransmit algorithm

Q: why resend the pkt with seq # y?
A: That is what the receiver expects.

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

app process may be slow at reading from buffer

speed-matching service: matching the send rate to the receiving app's drain rate

flow control: sender won't overflow receiver's buffer by transmitting too much, too fast

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including the value of RcvWindow in segments

Sender limits unACKed data to RcvWindow: guarantees the receive buffer doesn't overflow

This goes to show that the design of the header fields is important.
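A tiny Python sketch of the bookkeeping on both sides, using the slide's variable names (the byte counts are made-up example values):

RcvBuffer = 64_000
LastByteRcvd, LastByteRead = 150_000, 120_000
RcvWindow = RcvBuffer - (LastByteRcvd - LastByteRead)    # spare room advertised

# sender side: keep unACKed data within the advertised window
LastByteSent, LastByteAcked = 170_000, 150_000
can_send_more = (LastByteSent - LastByteAcked) < RcvWindow
print(RcvWindow, can_send_more)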

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender, receiver establish a "connection" before exchanging data segments

initialize TCP variables:
seq. #s
buffers, flow control info (e.g. RcvWindow)

client: connection initiator
  Socket clientSocket = new Socket("hostname", "port number");
server: contacted by client
  Socket connectionSocket = welcomeSocket.accept();

Three-way handshake:

Step 1: client host sends TCP SYN segment to server
  specifies initial seq #
  no data

Step 2: server host receives SYN, replies with SYN-ACK segment
  server allocates buffers
  specifies server initial seq #

Step 3: client receives SYN-ACK, replies with ACK segment, which may contain data

IMPORTANT: SYN FLOODING

3-73

TCP three-way handshake

client -> server (connection request): SYN=1, seq=client_isn
server -> client (connection granted): SYN=1, seq=server_isn, ack=client_isn+1
client -> server (ACK):                SYN=0, seq=client_isn+1, ack=server_isn+1
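The handshake itself is performed by the kernel; a minimal Python sketch showing where it happens from the application's point of view (the loopback address and port 12000 are arbitrary choices, not from the slides):

import socket, threading

welcome = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
welcome.bind(("127.0.0.1", 12000))
welcome.listen(1)                        # server is now ready to receive SYNs

def server():
    conn, addr = welcome.accept()        # returns once SYN / SYN-ACK / ACK completes
    conn.close()

threading.Thread(target=server).start()
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 12000))     # kernel performs the client side of the handshake
client.close()
welcome.close()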

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket: clientSocket.close();

Step 1: client end system sends TCP FIN control segment to server

Step 2: server receives FIN, replies with ACK; closes connection, sends FIN

(Figure: client sends FIN -> server ACKs -> server sends FIN -> client ACKs, enters timed wait, then closed.)

Q: why don't we combine ACK and FIN?
The sender (here, the server) may still have some data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3: client receives FIN, replies with ACK
  enters "timed wait" - will respond with ACK to received FINs

Step 4: server receives ACK; connection closed

Note: with a small modification, this can handle simultaneous FINs.

(Figure: both sides go through closing -> closed; the client waits out the timed-wait interval before closing.)

3-76

TCP Connection Management (cont)

(Figures: TCP client lifecycle and TCP server lifecycle state diagrams.)

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion, informally: "too many sources sending too much data too fast for the network to handle"

different from flow control!

manifestations:
lost packets (buffer overflow at routers)
long delays (queueing in router buffers)

another top-10 problem!

TCP provides one of the MANY WAYS to perform congestion control.

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous)
one router, infinite buffers, capacity C
no retransmission

large delays when congested
maximum achievable throughput

(Figure: Host A and Host B each send λ_in (original data, from application) into one router with unlimited shared output link buffers; λ_out is delivered to the receiving application.)

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers
sender retransmission of lost packet

(Figure: Host A sends λ_in (original data) plus λ'_in (original data plus retransmitted data, due to the transport layer) into a router with finite shared output link buffers; packets get dropped; λ_out is delivered.)

3-81

Causes/costs of congestion: scenario 2

always: λ_in = λ_out (goodput)

"perfect" retransmission, only when loss: λ'_in > λ_out

retransmission of delayed (not lost) packet makes λ'_in larger (than the perfect case) for the same λ_out

"costs" of congestion:
more work (retransmissions) for a given "goodput"
unneeded retransmissions: link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3

four senders
multihop paths
timeout/retransmit

Q: what happens as λ_in and λ'_in increase?

(Figure: Host A sends λ_in (original data) and λ'_in (original data plus retransmitted data) through routers with finite shared output link buffers toward Host B; λ_out is delivered.)

3-83

Causes/costs of congestion: scenario 3

Another "cost" of congestion: when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted

(Figure: as offered load λ'_in grows, delivered throughput λ_out eventually collapses.)

System collapses (e.g. students)

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
no explicit feedback from network
congestion inferred from end-system observed loss, delay
approach taken by TCP

Network-assisted congestion control:
routers provide feedback to end systems
single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
explicit rate the sender should send at

3-85

Case study ATM ABR congestion control

ABR: available bit rate
"elastic service"
if sender's path "underloaded": sender should use available bandwidth
if sender's path congested: sender throttled to minimum guaranteed rate

RM (resource management) cells:
sent by sender, interspersed with data cells
bits in RM cell set by switches ("network-assisted")
NI bit: no increase in rate (mild congestion)
CI bit: congestion indication
RM cells returned to sender by receiver, with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell:
congested switch may lower the ER value in the cell
sender's send rate is thus the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch
if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission: LastByteSent - LastByteAcked <= CongWin

Roughly,
rate = CongWin / RTT  bytes/sec

CongWin is a dynamic function of perceived network congestion

How does sender perceive congestion?
loss event = timeout or 3 duplicate ACKs
TCP sender reduces rate (CongWin) after loss event

three mechanisms:
AIMD
slow start
conservative after timeout events

Note: CC must be efficient to make use of available BW

3-89

TCP AIMD

(Figure: congestion window (8, 16, 24 Kbytes) versus time for a long-lived TCP connection - the familiar sawtooth.)

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

Role of ACKs:
1. An indication of loss
2. ACKs are self-clocking: if delay is large, the congestion window grows more slowly

Human analogy: HK government or CUHK

3-90

TCP Slow Start

When connection begins, CongWin = 1 MSS
Example: MSS = 500 bytes & RTT = 200 msec
initial rate = 20 kbps

available bandwidth may be >> MSS/RTT
desirable to quickly ramp up to a respectable rate

When connection begins, increase rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)

When connection begins, increase rate exponentially until the first loss event:
double CongWin every RTT
done by incrementing CongWin for every ACK received

Summary: initial rate is slow, but ramps up exponentially fast

(Figure: Host A sends one segment, then two, then four; one round of doubling per RTT.)

3-92

Refinement

After 3 dup ACKs:
CongWin is cut in half
window then grows linearly
this is known as the "fast-recovery" phase, implemented in (the newer) TCP Reno

But after a timeout event:
CongWin instead set to 1 MSS
window then grows exponentially to a threshold, then grows linearly

Philosophy:
3 dup ACKs indicates the network is capable of delivering some segments
a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout.

Implementation:
variable Threshold
at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

(Figure: congestion window size (segments, 0-14) versus transmission round (1-15), showing the threshold and the different behaviour of TCP Tahoe and TCP Reno after a loss.)

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, sender is in slow-start phase; window grows exponentially.

When CongWin is above Threshold, sender is in congestion-avoidance phase; window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss.

3-95

TCP sender congestion control

Event: ACK receipt for previously unACKed data
State: Slow Start (SS)
Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: resulting in a doubling of CongWin every RTT

Event: ACK receipt for previously unACKed data
State: Congestion Avoidance (CA)
Action: CongWin = CongWin + MSS * (MSS/CongWin)
Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
State: SS or CA
Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
State: SS or CA
Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: enter slow start

Event: duplicate ACK
State: SS or CA
Action: increment duplicate ACK count for segment being ACKed
Commentary: CongWin and Threshold not changed
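A sketch of the per-event actions in the table above as plain Python functions (CongWin and Threshold in bytes; "SS"/"CA" stand for the two states; this goes no further than what the table states):

def on_new_ack(congwin, threshold, state, mss):
    if state == "SS":
        congwin += mss                       # doubling per RTT
        if congwin > threshold:
            state = "CA"
    else:                                    # congestion avoidance
        congwin += mss * (mss / congwin)     # +1 MSS per RTT
    return congwin, threshold, state

def on_triple_dup_ack(congwin, threshold):
    threshold = congwin / 2
    congwin = threshold                      # fast recovery (Reno)
    return congwin, threshold, "CA"

def on_timeout(congwin, threshold, mss):
    threshold = congwin / 2
    congwin = 1 * mss
    return congwin, threshold, "SS"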

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when loss occurs.

When the window is W, throughput is W/RTT.

Just after loss, the window drops to W/2, throughput to W/(2*RTT).

Average throughput: 0.75 * W/RTT
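A quick numerical check of the 0.75 W/RTT claim (a sketch; W, RTT and MSS are arbitrary example values): in steady state the window ramps linearly from W/2 back up to W, so its time-average is about 3W/4.

W, RTT, MSS = 20, 0.1, 1500 * 8          # segments, seconds, bits per segment
windows = range(W // 2, W + 1)           # one loss cycle, one RTT per step
avg_window = sum(windows) / len(windows)
print(avg_window / W)                    # ~0.75
print(avg_window * MSS / RTT / 1e6, "Mbit/s average throughput")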

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput

Requires window size W = 83,333 in-flight segments

Throughput in terms of loss rate:

throughput = (1.22 * MSS) / (RTT * sqrt(L))

-> L = 2 * 10^-10   Wow!

New versions of TCP for high-speed networks are needed!
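Plugging the slide's numbers into that formula (a sketch):

MSS, RTT, target = 1500 * 8, 0.100, 10e9      # bits, seconds, bits/sec
W = target * RTT / MSS                        # required in-flight segments
L = (1.22 * MSS / (RTT * target)) ** 2        # loss rate from the formula
print(round(W), L)                            # ~83333 segments, ~2e-10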

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

(Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R.)

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g. similar propagation delay, etc.):
additive increase gives a slope of 1 as throughput increases
multiplicative decrease decreases throughput proportionally

(Figure: connection 2 throughput versus connection 1 throughput, both bounded by R, with the "equal bandwidth share" line; the trajectory alternates between congestion avoidance (additive increase) and loss (decrease window by factor of 2), converging toward the equal-share line.)

3-100

Fairness (more)

Fairness and UDP:
multimedia apps often do not use TCP
do not want rate throttled by congestion control
instead use UDP: pump audio/video at constant rate, tolerate packet loss
Research area: TCP friendly

Fairness and parallel TCP connections:
nothing prevents an app from opening parallel connections between 2 hosts
Web browsers do this
Example: link of rate R supporting 9 connections;
a new app asks for 1 TCP, gets rate R/10
a new app asks for 11 TCPs, gets around R/2

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by:
TCP connection establishment
data transmission delay
slow start (sender's congestion window)

Notation, assumptions:
assume one link between client and server, of rate R
S: MSS (bits)
O: object size (bits)
no retransmissions (no loss, no corruption)
protocol overhead is negligible

Window size:
first assume a fixed congestion window, W segments
then dynamic window, modeling slow start

3-102

Fixed congestion window (1): assume no congestion-window constraint

Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R
ACK for first segment in window returns before a window's worth of data has been sent

delay = 2RTT + O/R

(The object request is piggybacked.)

This is the "lower bound" of latency.

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R
wait for ACK after a window's worth of data has been sent

delay = 2RTT + O/R + (K-1)[S/R + RTT - WS/R]

where K is the # of windows of data that cover the object.
If O is the "size" of the object, we have K = O/(WS) (rounded up to an integer).
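A sketch that evaluates the two static-window cases (the parameter values in the call are arbitrary examples, not from the slides):

from math import ceil

def static_window_delay(O, S, R, RTT, W):
    K = ceil(O / (W * S))                  # windows of data that cover the object
    if W * S / R >= RTT + S / R:           # ACK returns before window is drained
        return 2 * RTT + O / R
    stall = S / R + RTT - W * S / R        # idle time per window
    return 2 * RTT + O / R + (K - 1) * stall

print(static_window_delay(O=500_000, S=536 * 8, R=1_000_000, RTT=0.1, W=4))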

3-104

TCP Delay Modeling Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the delay for one object is:

Latency = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where P is the number of times TCP idles at the server:

P = min{Q, K - 1}

- where Q is the number of times the server would idle if the object were of infinite size
- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

(Figure: timing diagram between client and server - initiate TCP connection, request object, then first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R, complete transmission, object delivered.)

Example:
O/S = 15 segments
K = 4 windows
Q = 2
P = min{K-1, Q} = 2
Server idles P = 2 times

Delay components:
2 RTT for connection establishment and request
O/R to transmit object
time server idles due to slow start

Server idles P = min{K-1, Q} times

3-106

TCP Delay Modeling (3)

time to transmit the kth window = 2^(k-1) S/R

idle time after the kth window = [S/R + RTT - 2^(k-1) S/R]^+

(S/R + RTT = time from when the server starts to send a segment until the server receives an acknowledgement.)

delay = 2 RTT + O/R + sum_{p=1..P} idleTime_p
      = 2 RTT + O/R + sum_{k=1..P} [S/R + RTT - 2^(k-1) S/R]
      = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where k = 1, 2, ..., K.

(Figure: same client/server timing diagram as before - initiate TCP connection, request object, windows of S/R, 2S/R, 4S/R, 8S/R, complete transmission, object delivered.)

3-107

TCP Delay Modeling (4)

Recall K = number of windows that cover the object. How do we calculate K?

K = min{k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O}
  = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
  = min{k : 2^k - 1 >= O/S}
  = min{k : k >= log2(O/S + 1)}
  = ceil(log2(O/S + 1))

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
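A sketch that evaluates the slow-start latency model (the ceiling form of Q below is my assumption about the homework result; for Experiment A's parameters it lands close to, though not always exactly on, the tabulated values because of rounding in O/S and Q):

from math import ceil, log2

def slow_start_latency(O, S, R, RTT):
    K = ceil(log2(O / S + 1))                    # windows that cover the object
    Q = max(0, ceil(log2(1 + RTT / (S / R))))    # idles for an infinite object (assumed form)
    P = min(K - 1, Q)
    return 2 * RTT + O / R + P * (RTT + S / R) - (2**P - 1) * S / R

S = 536 * 8
for R in (28e3, 100e3, 1e6, 10e6):               # Experiment A: O = 100 kbytes
    print(int(R), round(slow_start_latency(100e3 * 8, S, R, RTT=0.1), 2))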

3-108

Experiment A

R        | O/R      | P | Min latency (O/R + 2 RTT) | Latency with slow start
28 kbps  | 28.6 sec | 1 | 28.8 sec                  | 28.9 sec
100 kbps | 8 sec    | 2 | 8.2 sec                   | 8.4 sec
1 Mbps   | 800 msec | 5 | 1.0 sec                   | 1.5 sec
10 Mbps  | 80 msec  | 7 | 0.28 sec                  | 0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R        | O/R      | P | Min latency (O/R + 2 RTT) | Latency with slow start
28 kbps  | 1.43 sec | 1 | 1.63 sec                  | 1.73 sec
100 kbps | 0.4 sec  | 2 | 0.6 sec                   | 0.76 sec
1 Mbps   | 40 msec  | 3 | 0.24 sec                  | 0.52 sec
10 Mbps  | 4 msec   | 3 | 0.20 sec                  | 0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

R        | O/R      | P | Min latency (O/R + 2 RTT) | Latency with slow start
28 kbps  | 1.43 sec | 3 | 3.4 sec                   | 5.8 sec
100 kbps | 0.4 sec  | 3 | 2.4 sec                   | 5.2 sec
1 Mbps   | 40 msec  | 3 | 2.0 sec                   | 5.0 sec
10 Mbps  | 4 msec   | 3 | 2.0 sec                   | 5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
1 base HTML page (of size O bits)
M images (each of size O bits)

Non-persistent HTTP:
M+1 TCP connections in series
Response time = (M+1) O/R + (M+1) 2RTT + sum of idle times

Persistent HTTP:
2 RTT to request and receive base HTML file
1 RTT to request and receive M images
Response time = (M+1) O/R + 3RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
suppose M/X is an integer
1 TCP connection for base file
M/X sets of parallel connections for images
Response time = (M+1) O/R + (M/X + 1) 2RTT + sum of idle times
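A sketch plugging numbers into the three response-time formulas above, with the "sum of idle times" term left out for brevity (so these are lower bounds, not the plotted values):

def non_persistent(M, O, R, RTT):
    return (M + 1) * O / R + (M + 1) * 2 * RTT

def persistent(M, O, R, RTT):
    return (M + 1) * O / R + 3 * RTT

def parallel_non_persistent(M, X, O, R, RTT):
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT

M, X, O, RTT = 10, 5, 5_000 * 8, 0.1     # 10 images, 5 parallel conns, 5 KB objects
for R in (28e3, 100e3, 1e6, 10e6):
    print(int(R),
          round(non_persistent(M, O, R, RTT), 2),
          round(persistent(M, O, R, RTT), 2),
          round(parallel_non_persistent(M, X, O, R, RTT), 2))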

3-112

(Figure: HTTP response time in seconds (0-20) for non-persistent, persistent, and parallel non-persistent HTTP at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps; RTT = 100 msec, O = 5 Kbytes, M = 10 and X = 5.)

For a low-bandwidth connection, response time is dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

(Figure: HTTP response time in seconds (0-70) for non-persistent, persistent, and parallel non-persistent HTTP at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps; RTT = 1 sec, O = 5 Kbytes, M = 10 and X = 5.)

For larger RTT, response time is dominated by TCP establishment & slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services:
multiplexing, demultiplexing
reliable data transfer
flow control
congestion control

instantiation and implementation in the Internet: UDP, TCP

Next:
leaving the network "edge" (application, transport layers)
into the network "core"

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 33: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-33

rdt30 channels with errors and loss

New assumption underlying channel can also lose packets (data or ACKs) checksum seq

ACKs retransmissions will be of help but not enough

Q how to deal with loss sender waits until

certain data or ACK lost then retransmits

yuck drawbacks

Approach sender waits ldquoreasonablerdquo amount of time for ACK

retransmits if no ACK received in this time

if pkt (or ACK) just delayed (not lost) retransmission will be

duplicate but use of seq rsquos already handles this

receiver must specify seq of pkt being ACKed

requires countdown timer

What is the ldquoright valuerdquo for timer It depends on the flow and network condition

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP: multimedia apps often do not use TCP; they do not want their rate throttled by congestion control. Instead they use UDP: pump audio/video at a constant rate and tolerate packet loss. Research area: TCP-friendly congestion control.

Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts, and Web browsers do this. Example: a link of rate R already supporting 9 connections. A new app that asks for 1 TCP connection gets rate R/10; a new app that asks for 11 TCP connections gets around R/2 (11 of the 20 connections, i.e., 11R/20).

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by: TCP connection establishment, data transmission delay, slow start, and the sender's congestion window.

Notation, assumptions:
Assume one link between client and server, of rate R
S: MSS (bits)
O: object size (bits)
no retransmissions (no loss, no corruption)
protocol overhead is negligible

Window size:
First assume a fixed congestion window of W segments
Then dynamic window modeling: slow start

3-102

Fixed congestion window (1): assume no congestion window constraint

Let W denote a 'fixed' congestion window size (a positive integer)

First case: WS/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data has been sent

delay = 2RTT + O/R

The object request is piggybacked on the connection setup

This is the "lower bound" of latency

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R: the sender must wait for an ACK after sending a window's worth of data

delay = 2RTT + O/R + (K-1)·[S/R + RTT - WS/R]

where K is the number of windows of data that cover the object. If O is the "size" of the object, K = O/(WS) (rounded up to an integer).
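A small calculator for the two static-window cases above (a sketch; the function name is illustrative, quantities are in bits and seconds):

    from math import ceil

    def static_window_delay(O, S, R, RTT, W):
        """Latency to fetch an O-bit object with a fixed window of W segments of S bits
        over a link of rate R, following the two cases above."""
        K = ceil(O / (W * S))                      # number of windows covering the object
        stall = S / R + RTT - W * S / R            # idle time per window, if positive
        if stall <= 0:                             # case 1: WS/R >= RTT + S/R, no stalling
            return 2 * RTT + O / R
        return 2 * RTT + O / R + (K - 1) * stall   # case 2: stall after each of the first K-1 windows

    # Example with hypothetical numbers: 100 kbit object, 8 kbit segments, 1 Mbps link, 100 ms RTT
    print(static_window_delay(O=100e3, S=8e3, R=1e6, RTT=0.1, W=4))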

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

Latency = 2RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where P is the number of times TCP idles at the server:

P = min{Q, K-1}

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[Figure: client/server timing diagram: initiate TCP connection; request object; first window = S/R; second window = 2S/R; third window = 4S/R; fourth window = 8S/R; complete transmission; object delivered; time at client / time at server]

Example:
• O/S = 15 segments
• K = 4 windows
• Q = 2
• P = min{K-1, Q} = 2

Server idles P = 2 times

Delay components:
• 2 RTT for connection establishment and request
• O/R to transmit the object
• time the server idles due to slow start

Server idles P = min{K-1, Q} times

3-106

TCP Delay Modeling (3)

For the kth window, k = 1, 2, …, K:

time to transmit the kth window = 2^(k-1)·S/R

time from when the server starts to send a segment until the server receives its acknowledgement = S/R + RTT

idle time after the kth window = max{0, S/R + RTT - 2^(k-1)·S/R}

delay = 2RTT + O/R + sum over p = 1..P of idleTime_p
      = 2RTT + O/R + sum over k = 1..P of [S/R + RTT - 2^(k-1)·S/R]
      = 2RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

[Figure: the same client/server timing diagram as on the previous slide, with successive windows of S/R, 2S/R, 4S/R, 8S/R]

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

K = min{k : 2^0 + 2^1 + … + 2^(k-1) ≥ O/S}
  = min{k : 2^k - 1 ≥ O/S}
  = min{k : k ≥ log2(O/S + 1)}
  = ⌈log2(O/S + 1)⌉

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).

3-108

Experiment A

R          O/R         P    Min latency (O/R + 2·RTT)    Latency with slow start
28 kbps    28.6 sec    1    28.8 sec                     28.9 sec
100 kbps   8 sec       2    8.2 sec                      8.4 sec
1 Mbps     800 msec    5    1.0 sec                      1.5 sec
10 Mbps    80 msec     7    0.28 sec                     0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R          O/R        P    Min latency (O/R + 2·RTT)    Latency with slow start
28 kbps    1.43 sec   1    1.63 sec                     1.73 sec
100 kbps   0.4 sec    2    0.6 sec                      0.76 sec
1 Mbps     40 msec    3    0.24 sec                     0.52 sec
10 Mbps    4 msec     3    0.20 sec                     0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

R          O/R        P    Min latency (O/R + 2·RTT)    Latency with slow start
28 kbps    1.43 sec   3    3.4 sec                      5.8 sec
100 kbps   0.4 sec    3    2.4 sec                      5.2 sec
1 Mbps     40 msec    3    2.0 sec                      5.0 sec
10 Mbps    4 msec     3    2.0 sec                      5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
1 base HTML page (of size O bits)
M images (each of size O bits)

Non-persistent HTTP:
M+1 TCP connections in series
Response time = (M+1)·O/R + (M+1)·2RTT + sum of idle times

Persistent HTTP:
2 RTT to request and receive the base HTML file
1 RTT to request and receive the M images
Response time = (M+1)·O/R + 3RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
Suppose M/X is an integer
1 TCP connection for the base file
M/X sets of parallel connections for the images
Response time = (M+1)·O/R + (M/X + 1)·2RTT + sum of idle times
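The three formulas are easy to compare numerically. A sketch (it ignores the "sum of idle times" terms, so these are lower bounds; the parameter values follow the scenario of the next slide):

    # Compare the three HTTP response-time formulas, ignoring slow-start idle times.
    def non_persistent(M, O, R, RTT):
        return (M + 1) * O / R + (M + 1) * 2 * RTT

    def persistent(M, O, R, RTT):
        return (M + 1) * O / R + 3 * RTT

    def parallel_non_persistent(M, X, O, R, RTT):
        return (M + 1) * O / R + (M / X + 1) * 2 * RTT

    O = 5 * 8000                   # 5 kbytes per object, in bits
    M, X, RTT = 10, 5, 0.1
    for R in (28e3, 100e3, 1e6, 10e6):
        print(R, non_persistent(M, O, R, RTT),
                 persistent(M, O, R, RTT),
                 parallel_non_persistent(M, X, O, R, RTT))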

3-112

[Figure: HTTP response time (in seconds) for RTT = 100 msec, O = 5 Kbytes, M = 10 and X = 5, comparing non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps]

For low bandwidth, connection & response time are dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

[Figure: HTTP response time (in seconds) for RTT = 1 sec, O = 5 Kbytes, M = 10 and X = 5, comparing non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps]

For larger RTT, response time is dominated by TCP establishment & slow-start delays. Persistent connections now give an important improvement, particularly in high delay·bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services: multiplexing/demultiplexing, reliable data transfer, flow control, congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application, transport layers) and heading into the network "core"

Page 34: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-34

rdt30 sender

sndpkt = make_pkt(0 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

Wait for

ACK0

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt1) )

Wait for call 1 from

above

sndpkt = make_pkt(1 data checksum)udt_send(sndpkt)start_timer

rdt_send(data)

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt0)

rdt_rcv(rcvpkt) ampamp ( corrupt(rcvpkt) ||isACK(rcvpkt0) )

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt) ampamp isACK(rcvpkt1)

stop_timerstop_timer

udt_send(sndpkt)start_timer

timeout

udt_send(sndpkt)start_timer

timeout

rdt_rcv(rcvpkt)

Wait for call 0from

above

Wait for

ACK1

rdt_rcv(rcvpkt)

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 35: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-35

rdt30 in action

Timer ticktickhellip

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

The timeout period is often relatively long: long delay before resending a lost packet.

Detect lost segments via duplicate ACKs:
  - sender often sends many segments back-to-back
  - if a segment is lost, there will likely be many duplicate ACKs

If the sender receives 3 duplicate ACKs for the same data, it assumes the segment after the ACKed data was lost:
  - fast retransmit: resend the segment before the timer expires

3-67

Fast retransmit algorithm:

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments)
      start timer
  }
  else {
    /* a duplicate ACK for an already ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y   /* fast retransmit */
  }

Q: why resend the packet with seq # y?
A: That is what the receiver expects.
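Continuing the illustrative SimpleTcpSender sketch from above, the ACK handler could add duplicate-ACK counting roughly like this (again a sketch, not a real stack's code):

class FastRetransmitSender(SimpleTcpSender):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.dup_acks = {}                      # ack value y -> count of duplicates seen

    def on_ack(self, y):
        if y > self.send_base:
            self.dup_acks.pop(y, None)
            super().on_ack(y)                   # normal cumulative-ACK handling
        else:
            self.dup_acks[y] = self.dup_acks.get(y, 0) + 1
            if self.dup_acks[y] == 3 and y in self.unacked:
                # fast retransmit: resend segment y before the timer expires
                self.send_to_ip((y, self.unacked[y]))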

3-68

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    - segment structure
    - reliable data transfer
    - flow control
    - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

app process may be slow at reading from the buffer

speed-matching service: match the send rate to the receiving app's drain rate

flow control: sender won't overflow the receiver's buffer by transmitting too much, too fast

3-70

TCP Flow control: how it works

(Suppose the TCP receiver discards out-of-order segments)

spare room in buffer = RcvWindow
                     = RcvBuffer - [LastByteRcvd - LastByteRead]

Receiver advertises the spare room by including the value of RcvWindow in segments

Sender limits unACKed data to RcvWindow: guarantees the receive buffer doesn't overflow

This goes to show that the design of the header fields is important
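The flow-control arithmetic in a short sketch (variable names mirror the slide; the example numbers are made up):

def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    # spare room the receiver can advertise
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

def can_send(last_byte_sent, last_byte_acked, rwnd, new_bytes):
    # sender keeps unACKed data <= advertised RcvWindow
    return (last_byte_sent - last_byte_acked) + new_bytes <= rwnd

# Example: 64 KB buffer, 12 KB received but only 4 KB read by the app so far
rwnd = rcv_window(64 * 1024, 12 * 1024, 4 * 1024)     # 56 KB of spare room
print(rwnd, can_send(20_000, 10_000, rwnd, 1460))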

3-71

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    - segment structure
    - reliable data transfer
    - flow control
    - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender and receiver establish a "connection" before exchanging data segments

initialize TCP variables: seq. #s, buffers, flow-control info (e.g. RcvWindow)

client: connection initiator
  Socket clientSocket = new Socket("hostname", port number);

server: contacted by client
  Socket connectionSocket = welcomeSocket.accept();

Three-way handshake:

Step 1: client host sends TCP SYN segment to server; specifies initial seq #; no data

Step 2: server host receives SYN, replies with SYN-ACK segment; server allocates buffers; specifies server initial seq #

Step 3: client receives SYN-ACK, replies with ACK segment, which may contain data

IMPORTANT: SYN FLOODING
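The slide's Java snippets show the API view of connection setup; an equivalent minimal sketch in Python (host and port values are placeholders) looks like this:

import socket

# Client side: connect() triggers the three-way handshake.
def tcp_client(host="example.com", port=80):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))          # SYN / SYN-ACK / ACK happen here
    return s

# Server side: accept() returns once a handshake completes.
def tcp_server(port=12345):
    welcome = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    welcome.bind(("", port))
    welcome.listen()
    conn, addr = welcome.accept()    # a new connection socket per client
    return conn, addr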

3-73

TCP three-way handshake

[Timeline figure, client <-> server]

Connection request:  SYN=1, seq=client_isn
Connection granted:  SYN=1, seq=server_isn, ack=client_isn+1
ACK:                 SYN=0, seq=client_isn+1, ack=server_isn+1

3-74

TCP Connection Management (cont)

Closing a connection:

client closes socket: clientSocket.close();

Step 1: client end system sends TCP FIN control segment to server

Step 2: server receives FIN, replies with ACK; closes connection, sends FIN

[Timeline figure: client sends FIN, server ACKs and later sends its own FIN, client ACKs and enters timed wait before the connection is closed]

Q: why don't we combine the ACK and FIN?
A: the sender may still have some data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3: client receives FIN, replies with ACK; enters "timed wait" - will respond with ACK to received FINs

Step 4: server receives ACK; connection closed

Note: with a small modification, this can handle simultaneous FINs

[Timeline figure: client and server exchange FIN/ACK; the client holds the timed-wait state before moving to closed]

3-76

TCP Connection Management (cont)

[State-machine figures: TCP client lifecycle and TCP server lifecycle]

3-77

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    - segment structure
    - reliable data transfer
    - flow control
    - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-78

Principles of Congestion Control

Congestion: informally, "too many sources sending too much data too fast for the network to handle"

different from flow control!

manifestations:
  - lost packets (buffer overflow at routers)
  - long delays (queueing in router buffers)

another top-10 problem!

TCP provides one of the MANY WAYS to perform CC

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous)
one router, infinite buffers, capacity C
no retransmission

large delays when congested
maximum achievable throughput

[Figure: Host A and Host B send λ_in (original data, from the application) into a router with unlimited shared output-link buffers; λ_out is delivered to the application]

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers
sender retransmission of lost packets

[Figure: Host A offers λ_in (original data) and λ'_in (original data plus retransmitted data, due to the transport layer) to a router with finite shared output-link buffers; packets get dropped; λ_out is delivered to Host B]

3-81

Causes/costs of congestion: scenario 2

always: λ_in = λ_out (goodput)

"perfect" retransmission, only when loss: λ'_in > λ_out

retransmission of delayed (not lost) packets makes λ'_in larger (than the perfect case) for the same λ_out

"costs" of congestion:
  - more work (retransmissions) for a given "goodput"
  - unneeded retransmissions: the link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3

four senders
multihop paths
timeout/retransmit

Q: what happens as λ_in and λ'_in increase?

[Figure: Host A sends λ_in (original data) and λ'_in (original data plus retransmitted data) through routers with finite shared output-link buffers toward Host B; λ_out is the delivered rate]

3-83

Causes/costs of congestion: scenario 3

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

[Figure: λ_out collapses as the offered load grows]

System collapses (e.g. students)

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
  - no explicit feedback from the network
  - congestion inferred from end-system observed loss and delay
  - approach taken by TCP

Network-assisted congestion control:
  - routers provide feedback to end systems
  - single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
  - or explicit rate the sender should send at

3-85

Case study ATM ABR congestion control

ABR: available bit rate

"elastic service":
  - if sender's path is "underloaded": sender should use the available bandwidth
  - if sender's path is congested: sender throttled to a minimum guaranteed rate

RM (resource management) cells:
  - sent by sender, interspersed with data cells
  - bits in RM cell set by switches ("network-assisted"):
      NI bit: no increase in rate (mild congestion)
      CI bit: congestion indication
  - RM cells returned to sender by receiver, with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell:
  - a congested switch may lower the ER value in the cell
  - the sender's send rate is thus the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in data cells:
  - set to 1 by a congested switch
  - if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    - segment structure
    - reliable data transfer
    - flow control
    - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission:
  LastByteSent - LastByteAcked <= CongWin

Roughly,
  rate = CongWin / RTT  bytes/sec

CongWin is a dynamic function of perceived network congestion

How does the sender perceive congestion?
  - loss event = timeout or 3 duplicate ACKs
  - TCP sender reduces rate (CongWin) after a loss event

three mechanisms:
  - AIMD
  - slow start
  - conservative after timeout events

Note: CC must be efficient to make use of the available BW

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events (probing)

[Figure: congestion window (8 / 16 / 24 Kbytes) over time for a long-lived TCP connection - the familiar sawtooth]

Role of ACKs:
  1. an indication of loss
  2. ACKs are self-clocking: if the delay is large, the rate at which the congestion window grows is reduced

Human analogy: HK government or CUHK

3-90

TCP Slow Start

When connection begins, CongWin = 1 MSS
  Example: MSS = 500 bytes & RTT = 200 msec
  initial rate = 20 kbps

available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate

When connection begins, increase rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)

When connection begins, increase rate exponentially until the first loss event:
  - double CongWin every RTT
  - done by incrementing CongWin for every ACK received

Summary: the initial rate is slow, but it ramps up exponentially fast

[Timeline figure: Host A sends one segment, then two, then four, one round per RTT]

3-92

Refinement

After 3 dup ACKs:
  - CongWin is cut in half, and the window then grows linearly
  - this is known as the "fast recovery" phase, implemented in newer TCP (Reno)

But after a timeout event:
  - CongWin is instead set to 1 MSS
  - the window then grows exponentially up to a threshold, then grows linearly

Philosophy:
  - 3 dup ACKs indicate the network is still capable of delivering some segments
  - a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.

Implementation:
  - variable Threshold
  - at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

[Figure: congestion window size (segments) vs. transmission round, showing the threshold line and the differing behavior of TCP Tahoe and TCP Reno after a loss]

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase: the window grows exponentially.

When CongWin is above Threshold, the sender is in the congestion-avoidance phase: the window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss.
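The four rules above condense into a small state machine. The sketch below is illustrative only (per-ACK updates, no fast-recovery inflation details), not a faithful Reno implementation:

MSS = 1460   # bytes, an assumed segment size

class CongestionControl:
    def __init__(self):
        self.congwin = 1 * MSS
        self.threshold = 64 * 1024     # arbitrary initial threshold for the sketch

    def state(self):
        return "slow start" if self.congwin < self.threshold else "congestion avoidance"

    def on_new_ack(self):
        if self.congwin < self.threshold:
            self.congwin += MSS                          # doubles CongWin every RTT
        else:
            self.congwin += MSS * MSS / self.congwin     # ~1 MSS per RTT

    def on_triple_dup_ack(self):
        self.threshold = self.congwin / 2
        self.congwin = self.threshold                    # fast recovery (Reno)

    def on_timeout(self):
        self.threshold = self.congwin / 2
        self.congwin = 1 * MSS                           # back to slow start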

3-95

TCP sender congestion control

Event: ACK receipt for previously unACKed data
  State: Slow Start (SS)
  Action: CongWin = CongWin + MSS; if (CongWin > Threshold), set state to "Congestion Avoidance"
  Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unACKed data
  State: Congestion Avoidance (CA)
  Action: CongWin = CongWin + MSS * (MSS/CongWin)
  Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
  State: SS or CA
  Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
  Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
  State: SS or CA
  Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
  Commentary: enter slow start

Event: duplicate ACK
  State: SS or CA
  Action: increment the duplicate ACK count for the segment being ACKed
  Commentary: CongWin and Threshold not changed

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when a loss occurs.

When the window is W, throughput is W/RTT.

Just after a loss, the window drops to W/2, and throughput to W/(2*RTT).

Average throughput: 0.75 * W/RTT

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput.

Requires a window size of W = 83,333 in-flight segments.

Throughput in terms of loss rate:

  throughput = 1.22 * MSS / (RTT * sqrt(L))

=> L = 2 * 10^-10   Wow! New versions of TCP for high-speed links are needed!
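A quick sanity check on those numbers, using the formula above (a sketch; the constants are the ones from this slide):

# Check the 10 Gbps example with the loss-rate throughput formula.
MSS = 1500 * 8           # bits
RTT = 0.100              # seconds
target = 10e9            # 10 Gbps

W = target * RTT / MSS                    # in-flight segments needed: ~83,333
L = (1.22 * MSS / (RTT * target)) ** 2    # required loss rate: ~2e-10
print(round(W), L)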

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.

[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]

3-99

Why is TCP fair?

Two competing homogeneous sessions (e.g. similar propagation delay, etc.):
  - additive increase gives a slope of 1 as throughput increases
  - multiplicative decrease decreases throughput proportionally

[Figure: Connection 1 throughput vs. Connection 2 throughput; the trajectory alternates between congestion avoidance (additive increase) and loss (decrease window by a factor of 2), converging toward the equal-bandwidth-share line at R/2, R/2]

3-100

Fairness (more)

Fairness and UDP:
  - multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
  - instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
  - research area: TCP-friendly congestion control

Fairness and parallel TCP connections:
  - nothing prevents an app from opening parallel connections between 2 hosts
  - Web browsers do this
  - Example: link of rate R supporting 9 connections
      a new app asking for 1 TCP connection gets rate R/10
      a new app asking for 11 TCP connections gets around R/2

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by:
  - TCP connection establishment
  - data transmission delay
  - slow start (the sender's congestion window)

Notation, assumptions:
  - assume one link between client and server, of rate R
  - S: MSS (bits)
  - O: object size (bits)
  - no retransmissions (no loss, no corruption)
  - protocol overhead is negligible

Window size:
  - first assume a fixed congestion window of W segments
  - then dynamic window modeling: slow start

3-102

Fixed congestion window (1): assume no congestion window constraint

Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R
  The ACK for the first segment in the window returns before a window's worth of data has been sent.

  delay = 2RTT + O/R

(The object request is piggybacked; this is the "lower bound" of latency.)

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R
  The sender must wait for an ACK after sending each window's worth of data.

  delay = 2RTT + O/R + (K-1)[S/R + RTT - WS/R]

where K is the number of windows of data that cover the object: if O is the "size" of the object, K = O/(WS), rounded up to an integer.
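Both cases in a short sketch (the example values are chosen only for illustration):

from math import ceil

def static_window_delay(O, S, R, RTT, W):
    # Latency of downloading an O-bit object over one link with a fixed window of W segments.
    K = ceil(O / (W * S))                       # windows needed to cover the object
    stall = S / R + RTT - W * S / R             # idle time per window, if positive
    if stall <= 0:                              # case 1: the pipe stays full
        return 2 * RTT + O / R
    return 2 * RTT + O / R + (K - 1) * stall    # case 2: sender stalls between windows

# Example: 100 kbyte object, 536-byte MSS, 1 Mbps link, 100 ms RTT, W = 4
print(static_window_delay(O=800_000, S=536 * 8, R=1e6, RTT=0.1, W=4))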

3-104

TCP Delay Modeling: Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the delay for one object is:

  Latency = 2RTT + O/R + P[RTT + S/R] - (2^P - 1)S/R

where P is the number of times TCP idles at the server:

  P = min{Q, K-1}

  - where Q is the number of times the server would idle if the object were of infinite size
  - and K is the number of windows that cover the object

3-105

TCP Delay Modeling: Slow Start (2)

[Timeline figure: client initiates the TCP connection and requests the object; the server sends windows of S/R, 2S/R, 4S/R, 8S/R, ...; the object is delivered at the client]

Example:
  - O/S = 15 segments
  - K = 4 windows
  - Q = 2
  - P = min{K-1, Q} = 2

The server idles P = min{K-1, Q} times (here P = 2).

Delay components:
  - 2 RTT for connection establishment and request
  - O/R to transmit the object
  - time the server idles due to slow start

3-106

TCP Delay Modeling (3)

For k = 1, 2, ..., K:

  2^(k-1) * S/R  = time to transmit the k-th window

  S/R + RTT      = time from when the server starts to send a segment
                   until the server receives an acknowledgement

  [S/R + RTT - 2^(k-1) * S/R]^+  = idle time after the k-th window

  delay = 2RTT + O/R + sum over k=1..P of idleTime_k
        = 2RTT + O/R + sum over k=1..P of [S/R + RTT - 2^(k-1) * S/R]
        = 2RTT + O/R + P[RTT + S/R] - (2^P - 1)S/R

[Timeline figure, as before: windows of S/R, 2S/R, 4S/R, 8S/R exchanged between client and server]

3-107

TCP Delay Modeling (4)

Recall: K = the number of windows that cover the object.

How do we calculate K?

  K = min{k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O}
    = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
    = min{k : 2^k - 1 >= O/S}
    = min{k : k >= log2(O/S + 1)}
    = ceil(log2(O/S + 1))

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
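The whole slow-start latency model fits in a few lines. In the sketch below, the closed form used for Q is an assumption on my part (derived the same way as K); the example row uses Experiment A's 1 Mbps parameters:

from math import ceil, log2

def slow_start_latency(O, S, R, RTT):
    K = ceil(log2(O / S + 1))                    # windows that cover the object
    Q = max(0, ceil(log2(1 + RTT * R / S)))      # idles for an infinite-size object (assumed form)
    P = min(Q, K - 1)                            # times the server actually idles
    latency = 2 * RTT + O / R + P * (RTT + S / R) - (2**P - 1) * S / R
    return P, latency

# Example: R = 1 Mbps, S = 536 bytes, RTT = 100 ms, O = 100 kbytes
print(slow_start_latency(O=8e5, S=536 * 8, R=1e6, RTT=0.1))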

3-108

Experiment A
S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

R         O/R        P    Min latency (O/R + 2RTT)   Latency with slow start
28 kbps   28.6 sec   1    28.8 sec                   28.9 sec
100 kbps  8.0 sec    2    8.2 sec                    8.4 sec
1 Mbps    800 msec   5    1.0 sec                    1.5 sec
10 Mbps   80 msec    7    0.28 sec                   0.98 sec

1. Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B
S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

R         O/R        P    Min latency (O/R + 2RTT)   Latency with slow start
28 kbps   1.43 sec   1    1.63 sec                   1.73 sec
100 kbps  0.4 sec    2    0.6 sec                    0.76 sec
1 Mbps    40 msec    3    0.24 sec                   0.52 sec
10 Mbps   4 msec     3    0.20 sec                   0.50 sec

1. Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C
S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

R         O/R        P    Min latency (O/R + 2RTT)   Latency with slow start
28 kbps   1.43 sec   3    3.4 sec                    5.8 sec
100 kbps  0.4 sec    3    2.4 sec                    5.2 sec
1 Mbps    40 msec    3    2.0 sec                    5.0 sec
10 Mbps   4 msec     3    2.0 sec                    5.0 sec

1. Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
  - 1 base HTML page (of size O bits)
  - M images (each of size O bits)

Non-persistent HTTP:
  - M+1 TCP connections in series
  - Response time = (M+1)O/R + (M+1)2RTT + sum of idle times

Persistent HTTP:
  - 2 RTT to request and receive the base HTML file
  - 1 RTT to request and receive the M images
  - Response time = (M+1)O/R + 3RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
  - suppose M/X is an integer
  - 1 TCP connection for the base file
  - M/X sets of parallel connections for the images
  - Response time = (M+1)O/R + (M/X + 1)2RTT + sum of idle times
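Ignoring the "sum of idle times" terms, the three response-time formulas compare as follows (a sketch; the parameter values match the charts on the next slides):

def non_persistent(O, R, RTT, M):
    return (M + 1) * O / R + (M + 1) * 2 * RTT

def persistent(O, R, RTT, M):
    return (M + 1) * O / R + 3 * RTT

def parallel_non_persistent(O, R, RTT, M, X):
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT

O, M, X, RTT = 5 * 8 * 1024, 10, 5, 0.1          # 5 Kbyte objects, 10 images, 5 parallel conns
for R in (28e3, 100e3, 1e6, 10e6):
    print(R, non_persistent(O, R, RTT, M), persistent(O, R, RTT, M),
          parallel_non_persistent(O, R, RTT, M, X))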

3-112

[Bar chart: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent HTTP at 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5]

For a low-bandwidth connection, response time is dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

[Bar chart: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent HTTP at 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5]

For a larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport-layer services:
  - multiplexing, demultiplexing
  - reliable data transfer
  - flow control
  - congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application, transport layers) and heading into the network "core"

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 36: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-36

rdt30 in action

Is it necessary to send Ack1 again

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 37: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-37

Performance of rdt30

rdt30 works but performance stinks example 1 Gbps link 15 ms e-e prop delay 1KB packet

Ttransmi

t

= 8kbpkt109 bsec

= 8 microsec

U sender utilization ndash fraction of time sender busy sending

1KB pkt every 30 msec -gt 33kBsec thruput over 1 Gbps link

network protocol limits use of physical resources

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

L (packet length in bits)R (transmission rate bps)

=

3-38

rdt3.0: stop-and-wait operation

sender: first packet bit transmitted, t = 0
sender: last packet bit transmitted, t = L/R
receiver: first packet bit arrives; last packet bit arrives, send ACK
sender: ACK arrives, send next packet, t = RTT + L/R

  U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

3-39

Pipelined protocols

Pipelining: sender allows multiple "in-flight", yet-to-be-acknowledged pkts
  range of sequence numbers must be increased
  buffering at sender and/or receiver

Two generic forms of pipelined protocols: Go-Back-N, Selective Repeat

3-40

Pipelining increased utilization

sender: first packet bit transmitted, t = 0
sender: last bit transmitted, t = L/R
receiver: first packet bit arrives; last packet bit arrives, send ACK
receiver: last bit of 2nd packet arrives, send ACK; last bit of 3rd packet arrives, send ACK
sender: ACK arrives, send next packet, t = RTT + L/R

  U_sender = 3*(L/R) / (RTT + L/R) = 0.024 / 30.008 = 0.0008

Increase utilization by a factor of 3!
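The two utilization numbers above are easy to reproduce. Below is a minimal Python sketch (my own illustration, not from the slides) that plugs the 1 Gbps / 30 ms example into U_sender = N*(L/R) / (RTT + L/R) for stop-and-wait (N=1) and for the 3-packet pipeline.

  # Sender utilization on a 1 Gbps link with a 30 ms RTT (15 ms each way).
  L = 8_000        # packet length in bits (1 KB)
  R = 1e9          # link rate, bits/sec
  RTT = 0.030      # seconds

  t_transmit = L / R                     # 8 microseconds

  def utilization(n_packets):
      # Fraction of time the sender is busy with n_packets in flight per cycle.
      return n_packets * t_transmit / (RTT + t_transmit)

  print(f"stop-and-wait: {utilization(1):.5f}")   # ~0.00027
  print(f"pipelined N=3: {utilization(3):.5f}")   # ~0.00080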

3-41

Go-Back-N (sliding window protocol)

Sender:
  k-bit seq # in pkt header
  "window" of up to N consecutive unACKed pkts allowed

  ACK(n): ACKs all pkts up to and including seq # n - "cumulative ACK"
    sender may receive duplicate ACKs (see receiver)
  timer for each in-flight pkt
  timeout(n): retransmit pkt n and all higher seq # pkts in the window

(For now, treat seq # as unlimited.)

Q: what happens when a receiver is totally disconnected? MAX RETRY

DON'T FALL ASLEEP!

3-42

GBN sender extended FSM

Initially: base = 1, nextseqnum = 1

rdt_send(data):
  if (nextseqnum < base+N) {
    sndpkt[nextseqnum] = make_pkt(nextseqnum, data, chksum)
    udt_send(sndpkt[nextseqnum])
    if (base == nextseqnum)
      start_timer
    nextseqnum++
  }
  else
    refuse_data(data)            /* buffer data or block the higher app */

timeout:
  start_timer                    /* reset timer */
  udt_send(sndpkt[base])
  udt_send(sndpkt[base+1])
  ...
  udt_send(sndpkt[nextseqnum-1])

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
  base = getacknum(rcvpkt) + 1
  if (base == nextseqnum)        /* no pkt in pipe */
    stop_timer
  else
    start_timer

rdt_rcv(rcvpkt) && corrupt(rcvpkt):
  do nothing (stay in Wait)
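To make the FSM concrete, here is a small, self-contained Python sketch of a GBN sender (my own illustration; udt_send is stubbed with a print, and loss/corruption are not simulated). It follows the same three events: data from the app, a cumulative ACK, and a timeout.

  class GBNSender:
      def __init__(self, window_size):
          self.N = window_size
          self.base = 1              # oldest unACKed seq #
          self.nextseqnum = 1
          self.sndpkt = {}           # seq # -> data, kept for retransmission

      def udt_send(self, seq):
          print(f"send pkt {seq}: {self.sndpkt[seq]!r}")

      def rdt_send(self, data):
          # Refuse data if the window is full (caller must buffer or block).
          if self.nextseqnum < self.base + self.N:
              self.sndpkt[self.nextseqnum] = data
              self.udt_send(self.nextseqnum)
              self.nextseqnum += 1
              return True
          return False

      def ack_received(self, acknum):
          # Cumulative ACK: everything up to and including acknum is ACKed.
          self.base = max(self.base, acknum + 1)

      def timeout(self):
          # Resend every pkt in [base, nextseqnum).
          for seq in range(self.base, self.nextseqnum):
              self.udt_send(seq)

  sender = GBNSender(window_size=4)
  for i in range(6):
      sender.rdt_send(f"data-{i}")   # the last two calls are refused
  sender.ack_received(2)             # window slides forward
  sender.timeout()                   # retransmits pkts 3 and 4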

3-43

GBN receiver extended FSM

ACK-only: always send an ACK for the correctly-received pkt with the highest in-order seq #
  may generate duplicate ACKs
  need only remember expectedseqnum

out-of-order pkt:
  discard (don't buffer) -> no receiver buffering!
  re-ACK the pkt with the highest in-order seq #

Initially: expectedseqnum = 1; sndpkt = make_pkt(expectedseqnum, ACK, chksum)

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum):
  extract(rcvpkt, data)
  deliver_data(data)
  sndpkt = make_pkt(expectedseqnum, ACK, chksum)
  udt_send(sndpkt)
  expectedseqnum++

default:
  udt_send(sndpkt)

If an in-order pkt is received, deliver it to the app and ACK; else just drop it.
We don't even need a timer at the receiver - simply rely on retransmission by the sender.

3-44

GBN in action

What determines the size of the window?
  1. RTT
  2. Buffer at the receiver (flow control)
  3. Network congestion

Q: GBN can have poor performance. How?
   The sender sends pkts 1,2,3,4,5,6,7,8,9; pkt 1 gets lost; the receiver gets pkts 2,3,4,5,... but will discard them.

Window size = N = 4

3-45

Selective Repeat (improvement of the GBN protocol)

receiver individually acknowledges all correctly received pkts
  buffers pkts, as needed, for eventual in-order delivery to the upper layer
  e.g. sender sends pkts 1,2,3,4,...,10; receiver got 2,4,6,8,10; sender resends only 1,3,5,7,9

sender only resends pkts for which an ACK was not received
  sender keeps a timer for EACH unACKed pkt

sender window: N consecutive seq #'s; again limits the seq #'s of sent, unACKed pkts

3-46

Selective repeat sender receiver windows

Q: why do we have this region of the window?
   The ACK was lost, or the ACK is still on its way.

3-47

Selective repeat

sender:
  data from above:
    if the next available seq # is in the window, send the pkt
  timeout(n):
    resend pkt n, restart its timer
  ACK(n) in [sendbase, sendbase+N]:
    mark pkt n as received
    if n is the smallest unACKed pkt, advance the window base to the next unACKed seq # (slide the window)

receiver:
  pkt n in [rcvbase, rcvbase+N-1]:
    send ACK(n)
    out-of-order: buffer
    in-order: deliver (also deliver buffered, in-order pkts), advance the window to the next not-yet-received pkt
  pkt n in [rcvbase-N, rcvbase-1]:
    ACK(n)
  otherwise:
    ignore

Q: why do we need to ACK a pkt below the window?
   The ACK got lost; the sender may time out and resend the pkt, so we need to ACK it again.
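Here is a matching Python sketch of the selective-repeat receiver side (my own illustration; delivery is modeled as appending to a list). It shows the out-of-order buffering and the in-order flush that advances rcvbase.

  class SRReceiver:
      def __init__(self, window_size):
          self.N = window_size
          self.rcvbase = 1          # lowest not-yet-delivered seq #
          self.buffer = {}          # out-of-order pkts awaiting delivery
          self.delivered = []       # stands in for deliver_data()

      def receive(self, seq, data):
          if self.rcvbase <= seq < self.rcvbase + self.N:
              self.buffer[seq] = data
              while self.rcvbase in self.buffer:       # in-order flush
                  self.delivered.append(self.buffer.pop(self.rcvbase))
                  self.rcvbase += 1
              return seq                               # ACK this pkt
          if self.rcvbase - self.N <= seq < self.rcvbase:
              return seq        # re-ACK: the earlier ACK may have been lost
          return None           # outside both windows: ignore

  rx = SRReceiver(window_size=4)
  for seq, data in [(2, "b"), (4, "d"), (1, "a"), (3, "c")]:
      rx.receive(seq, data)
  print(rx.delivered)           # ['a', 'b', 'c', 'd'] - delivered in order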

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example: seq #'s 0, 1, 2, 3; window size (N) = 3

The receiver sees no difference in the two scenarios!
It incorrectly passes duplicate data up as new in (a).

Q: what is the relationship between seq # space size and window size?
   N <= 2^k / 2  (the window must be at most half the seq # space)

In real life we use k bits to implement the seq # - a practical issue.

3-50

Why bother studying reliable data transfer?

We know it is provided by TCP, so why bother to study it?

Sometimes we may need to implement "some form" of reliable transfer without the heavy-duty TCP.

A good example is multimedia streaming. Even though the application is loss tolerant, if too many packets get lost it affects the visual quality, so we may want to implement some form of reliable transfer.

At the very least, appreciate the "good services" provided by some Internet gurus.

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data:
  bi-directional data flow in the same connection
  MSS: maximum segment size
connection-oriented:
  handshaking (exchange of control msgs) inits sender & receiver state (e.g. buffer size) before data exchange
flow controlled:
  sender will not overwhelm the receiver
point-to-point:
  one sender, one receiver (not multicast)
reliable, in-order byte stream:
  no "message boundaries" (in the app layer we need delimiters)
pipelined:
  TCP congestion and flow control set the window size
send & receive buffers at both ends: the application writes data into the socket's send buffer, TCP builds segments, and the receiving application reads data from the receive buffer

(The 800-lb gorilla in the transport stack. PAY ATTENTION!)

3-53

TCP segment structure

32-bit-wide header (figure flattened into a field list):

  source port #, dest port #
  sequence number                 (counting by "bytes" of data, not segments)
  acknowledgement number
  head len | not used | flags U A P R S F
  receive window                  (# bytes rcvr willing to accept)
  checksum                        (Internet checksum, as in UDP)
  urgent data pointer
  options (variable length)
  application data (variable length)

URG: urgent data (generally not used)
ACK: ACK # valid
PSH: push data now (generally not used)
RST, SYN, FIN: connection estab (setup, teardown commands)

Due to the options field, we have a variable-length header.
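To connect the field list to bytes on the wire, the short Python sketch below (my own illustration) packs a bare 20-byte TCP header with the struct module. The checksum is left at 0 for simplicity, so this is not a transmittable segment, just a view of the layout.

  import struct

  def tcp_header(src_port, dst_port, seq, ack, flags, rcv_window):
      data_offset = 5                    # header length in 32-bit words (no options)
      offset_byte = data_offset << 4     # upper 4 bits of byte 13
      checksum = 0                       # real TCP computes this over a pseudo-header
      urgent_ptr = 0
      return struct.pack("!HHIIBBHHH",
                         src_port, dst_port, seq, ack,
                         offset_byte, flags, rcv_window, checksum, urgent_ptr)

  SYN = 0x02    # flag bits (low to high): FIN, SYN, RST, PSH, ACK, URG
  hdr = tcp_header(12345, 80, seq=0, ack=0, flags=SYN, rcv_window=65535)
  print(len(hdr), hdr.hex())             # 20 bytes, shown in hex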

3-54

TCP seq. #'s and ACKs

Seq. #'s:
  byte-stream "number" of the first byte in the segment's data
ACKs:
  seq # of the next byte expected from the other side
  cumulative ACK
Q: how does the receiver handle out-of-order segments?
A: the TCP spec doesn't say - up to the implementor

Simple telnet scenario:
  Host A: user types 'C'          -- Seq=42, ACK=79, data = 'C' -->
  Host B: ACKs receipt of 'C',
          echoes back 'C'         <-- Seq=79, ACK=43, data = 'C' --
  Host A: ACKs receipt of
          the echoed 'C'          -- Seq=43, ACK=80 -->

The initial seq #'s are negotiated during the 3-way handshake.

3-55

TCP Round Trip Time and Timeout

Q: how to set the TCP timeout value?
  longer than RTT, but RTT varies
  too short: premature timeout, unnecessary retransmissions
  too long: slow reaction to segment loss, poor performance

Q: how to estimate RTT?
  SampleRTT: measured time from segment transmission until ACK receipt
    ignore retransmissions
  SampleRTT will vary; we want the estimated RTT "smoother"
    average several recent measurements, not just the current SampleRTT

(Figure: two timelines - an estimated RTT that is too short fires before the ACK returns and causes an unnecessary retransmission; one that is too long reacts slowly to a real loss.)

3-56

TCP Round Trip Time and Timeout

  EstimatedRTT = (1 - α) * EstimatedRTT + α * SampleRTT

Exponential weighted moving average
  influence of a past sample decreases exponentially fast
  typical value: α = 0.125

With ERTT(0) = 0:
  ERTT(1) = (1-α)*ERTT(0) + α*SRTT(1) = α*SRTT(1)
  ERTT(2) = (1-α)*α*SRTT(1) + α*SRTT(2)
  ERTT(3) = (1-α)^2*α*SRTT(1) + (1-α)*α*SRTT(2) + α*SRTT(3)

3-57

Example RTT estimation

(Figure: SampleRTT and EstimatedRTT from gaia.cs.umass.edu to fantasia.eurecom.fr plotted over time (seconds); RTT on the y-axis ranges roughly from 100 to 350 milliseconds, with the EstimatedRTT curve visibly smoother than SampleRTT.)

3-58

TCP Round Trip Time and Timeout

Setting the timeout (Jacobson/Karels): EstimatedRTT plus a "safety margin"
  large variation in EstimatedRTT -> larger safety margin

First estimate how much SampleRTT deviates from EstimatedRTT:

  DevRTT = (1 - β) * DevRTT + β * |SampleRTT - EstimatedRTT|
  (typically β = 0.25)

Then set the timeout interval:

  TimeoutInterval = EstimatedRTT + 4 * DevRTT
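The two moving averages translate directly into code. Below is a minimal Python sketch (my own illustration, using the textbook values α = 0.125 and β = 0.25; initializing EstimatedRTT to the first sample is a common choice, not something the slides specify).

  class RTTEstimator:
      def __init__(self, alpha=0.125, beta=0.25):
          self.alpha, self.beta = alpha, beta
          self.estimated_rtt = None
          self.dev_rtt = 0.0

      def update(self, sample_rtt):
          # Fold one SampleRTT (seconds) into EstimatedRTT and DevRTT.
          if self.estimated_rtt is None:
              self.estimated_rtt = sample_rtt
          else:
              self.estimated_rtt = ((1 - self.alpha) * self.estimated_rtt
                                    + self.alpha * sample_rtt)
              self.dev_rtt = ((1 - self.beta) * self.dev_rtt
                              + self.beta * abs(sample_rtt - self.estimated_rtt))
          return self.estimated_rtt + 4 * self.dev_rtt    # TimeoutInterval

  est = RTTEstimator()
  for s in (0.105, 0.110, 0.250, 0.120):      # SampleRTTs in seconds
      print(f"sample={s:.3f}  timeout={est.update(s):.3f}")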

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates an rdt service on top of IP's unreliable service
  pipelined segments (for performance)
  cumulative ACKs
  TCP uses a single retransmission timer

Retransmissions are triggered by:
  timeout events
  duplicate ACKs (for performance reasons)

Initially, consider a simplified TCP sender:
  ignore duplicate ACKs
  ignore flow control, congestion control

3-61

TCP sender events:

data rcvd from app:
  create segment with seq #
  seq # is the byte-stream number of the first data byte in the segment
  start the timer if not already running (think of the timer as for the oldest unACKed segment)
  expiration interval: TimeOutInterval

timeout:
  retransmit the segment that caused the timeout
  restart the timer

ACK rcvd:
  if it acknowledges previously unACKed segments
    update what is known to be ACKed
    start the timer if there are outstanding segments

3-62

TCP sender (simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event)

  event: data received from application above
    create TCP segment with sequence number NextSeqNum
    if (timer currently not running)
      start timer
    pass segment to IP
    NextSeqNum = NextSeqNum + length(data)

  event: timer timeout
    retransmit not-yet-acknowledged segment with smallest sequence number
    start timer

  event: ACK received, with ACK field value of y
    if (y > SendBase) {
      SendBase = y
      if (there are currently not-yet-acknowledged segments)
        start timer
    }
} /* end of loop forever */

Comment: SendBase-1 = last cumulatively ACKed byte.
Example: SendBase-1 = 71 and y = 73, so the rcvr wants 73+;
y > SendBase, so new data is ACKed.

3-63

TCP retransmission scenarios

Lost ACK scenario:
  Host A sends "Seq=92, 8 bytes data"; Host B's "ACK=100" is lost (X).
  A times out and resends "Seq=92, 8 bytes data"; B replies "ACK=100" again.
  (SendBase = 100 once the ACK finally arrives.)

Premature timeout scenario:
  Host A sends "Seq=92, 8 bytes data" and then "Seq=100, 20 bytes data".
  The Seq=92 timer expires before "ACK=100" arrives, so A resends "Seq=92, 8 bytes data".
  B replies with the cumulative "ACK=120", and ACKs the retransmission with "ACK=120" again.
  (SendBase moves from 100 to 120.)

3-64

TCP retransmission scenarios (more)

Cumulative ACK scenario:
  Host A sends "Seq=92, 8 bytes data" and then "Seq=100, 20 bytes data".
  "ACK=100" is lost (X), but "ACK=120" arrives before the timeout,
  so nothing is retransmitted and SendBase moves to 120.

Room for improvement...

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at receiver -> TCP receiver action

In-order segment with expected seq #, all data up to expected seq # already ACKed
  -> delayed ACK: wait up to 500 ms for the next segment; if no next segment, send ACK

In-order segment with expected seq #, one other segment has an ACK pending
  -> immediately send a single cumulative ACK, ACKing both in-order segments

Out-of-order segment with higher-than-expected seq # (gap detected)
  -> immediately send a duplicate ACK, indicating the seq # of the next expected byte

Segment that partially or completely fills a gap
  -> immediately send ACK, provided the segment starts at the lower end of the gap

ACK the "largest in-order byte" seq #.
Delayed ACKs REDUCE FEEDBACK TRAFFIC BY HALF.

3-66

Fast Retransmit

The timeout period is often relatively long:
  long delay before resending a lost packet

Detect lost segments via duplicate ACKs:
  the sender often sends many segments back-to-back
  if a segment is lost, there will likely be many duplicate ACKs

If the sender receives 3 duplicate ACKs for the same data, it assumes that the segment after the ACKed data was lost:
  fast retransmit: resend the segment before its timer expires

3-67

Fast retransmit algorithm:

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments)
      start timer
  }
  else {                      /* a duplicate ACK for an already-ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y     /* fast retransmit */
  }

Q: why resend the pkt with seq # y?
A: that is what the receiver expects.

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

The receive side of a TCP connection has a receive buffer.

Flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast.

Speed-matching service: matching the send rate to the receiving app's drain rate
  the app process may be slow at reading from the buffer

3-70

TCP Flow control: how it works

(Suppose the TCP receiver discards out-of-order segments.)

Spare room in the buffer:
  RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]

The rcvr advertises the spare room by including the value of RcvWindow in its segments.

The sender limits unACKed data to RcvWindow
  guarantees the receive buffer doesn't overflow

This goes to show that the design process of the header is important.
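A tiny Python sketch of the receiver-side bookkeeping (my own illustration): the advertised window shrinks as segments arrive and grows back as the application reads.

  class ReceiveBuffer:
      def __init__(self, rcv_buffer):
          self.rcv_buffer = rcv_buffer      # total buffer size, bytes
          self.last_byte_rcvd = 0
          self.last_byte_read = 0

      def rcv_window(self):
          # Spare room advertised back to the sender.
          return self.rcv_buffer - (self.last_byte_rcvd - self.last_byte_read)

      def segment_arrives(self, nbytes):
          self.last_byte_rcvd += nbytes

      def app_reads(self, nbytes):
          self.last_byte_read += nbytes

  rb = ReceiveBuffer(rcv_buffer=4096)
  rb.segment_arrives(3000)
  print(rb.rcv_window())    # 1096 - sender must hold back
  rb.app_reads(2500)
  print(rb.rcv_window())    # 3596 - window opens again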

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender & receiver establish a "connection" before exchanging data segments
  initialize TCP variables: seq #'s, buffers, flow-control info (e.g. RcvWindow)
  client: connection initiator
    Socket clientSocket = new Socket("hostname", "port number");
  server: contacted by client
    Socket connectionSocket = welcomeSocket.accept();

Three-way handshake:

Step 1: client host sends a TCP SYN segment to the server
  specifies the initial seq #
  no data

Step 2: server host receives the SYN, replies with a SYN-ACK segment
  server allocates buffers
  specifies the server's initial seq #

Step 3: client receives the SYN-ACK, replies with an ACK segment, which may contain data

IMPORTANT: SYN FLOODING
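The Socket calls above are Java-flavored; an equivalent Python sketch is below (my own illustration, with port 12000 as an arbitrary example). The client's connect() is what triggers the three-way handshake shown on the next slide, and the server's accept() returns once the handshake completes. Run server() in one process and client() in another.

  import socket

  def server():
      welcome = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      welcome.bind(("", 12000))
      welcome.listen(1)
      conn, addr = welcome.accept()     # returns after SYN / SYN-ACK / ACK
      print("connection from", addr)
      conn.close()
      welcome.close()

  def client():
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.connect(("localhost", 12000))   # kernel sends SYN, awaits SYN-ACK, sends ACK
      s.close()                         # starts the FIN/ACK teardown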

3-73

TCP three-way handshake

client                                                      server
  -- (connection request) SYN=1, seq=client_isn -->
  <-- (connection granted) SYN=1, seq=server_isn, ack=client_isn+1 --
  -- (ACK) SYN=0, seq=client_isn+1, ack=server_isn+1 -->

3-74

TCP Connection Management (cont)

Closing a connection:

The client closes its socket: clientSocket.close();

Step 1: the client end system sends a TCP FIN control segment to the server.

Step 2: the server receives the FIN, replies with an ACK; it closes the connection and sends its own FIN.

(Timeline: client sends FIN -> server ACKs -> server sends FIN -> client ACKs, enters timed wait -> closed.)

Q: why don't we combine the ACK and the FIN?
   The sender may still have some data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3: the client receives the FIN, replies with an ACK.
  It enters "timed wait" - it will respond with an ACK to any received FINs.

Step 4: the server receives the ACK. Connection closed.

Note: with a small modification, this can handle simultaneous FINs.

(Timeline: both sides pass through closing -> closed; the client lingers in timed wait before closing.)

3-76

TCP Connection Management (cont)

(Figures: TCP client lifecycle and TCP server lifecycle state diagrams.)

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion:
  informally: "too many sources sending too much data too fast for the network to handle"
  different from flow control!
  manifestations:
    lost packets (buffer overflow at routers)
    long delays (queueing in router buffers)
  another top-10 problem!

TCP provides one of the MANY WAYS to perform congestion control.

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous)
one router, infinite buffers, capacity C
no retransmission

large delays when congested
maximum achievable throughput

(Figure: Host A and Host B send λ_in of original data from their applications into one router with unlimited shared output-link buffers; λ_out is delivered to the receiving applications.)

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers
sender retransmission of lost packets

(Figure: Host A offers λ_in of original data; λ'_in is original data plus retransmitted data - retransmissions are due to the transport layer when a pkt gets dropped; the router has finite shared output-link buffers; λ_out is the goodput at Host B.)

3-81

Causes/costs of congestion: scenario 2

always: λ_in = λ_out (goodput)
"perfect" retransmission, only when loss: λ'_in > λ_out
retransmission of a delayed (not lost) packet makes λ'_in larger (than the perfect case) for the same λ_out

"costs" of congestion:
  more work (retransmissions) for a given "goodput"
  unneeded retransmissions: the link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3

four senders, multihop paths, timeout/retransmit

Q: what happens as λ_in and λ'_in increase?

(Figure: Host A offers λ_in of original data, λ'_in including retransmissions, through routers with finite shared output-link buffers; λ_out is what Host B's application receives.)

3-83

Causes/costs of congestion: scenario 3

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

(Figure: as λ'_in grows, λ_out eventually collapses.)

The system collapses (e.g. students!).

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
  no explicit feedback from the network
  congestion inferred from end-system observed loss, delay
  approach taken by TCP

Network-assisted congestion control:
  routers provide feedback to end systems
    single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
    explicit rate the sender should send at

3-85

Case study ATM ABR congestion control

ABR: available bit rate:
  "elastic service"
  if the sender's path is "underloaded": the sender should use the available bandwidth
  if the sender's path is congested: the sender is throttled to the minimum guaranteed rate

RM (resource management) cells:
  sent by the sender, interspersed with data cells
  bits in the RM cell are set by switches ("network-assisted")
    NI bit: no increase in rate (mild congestion)
    CI bit: congestion indication
  RM cells are returned to the sender by the receiver, with the bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in the RM cell:
  a congested switch may lower the ER value in the cell
  the sender's send rate is thus the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch
  if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission:
  LastByteSent - LastByteAcked <= CongWin

Roughly:
  rate = CongWin / RTT  bytes/sec

CongWin is a dynamic function of perceived network congestion.

How does the sender perceive congestion?
  loss event = timeout or 3 duplicate ACKs
  the TCP sender reduces its rate (CongWin) after a loss event

three mechanisms:
  AIMD
  slow start
  conservative after timeout events

Note: CC must also be efficient, to make use of the available BW.

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

(Figure: for a long-lived TCP connection, the congestion window traces a sawtooth over time, oscillating between roughly 8, 16 and 24 Kbytes.)

Role of ACKs:
  1. an indication of loss
  2. ACKs are self-clocking: if the delay is large, the growth rate of the congestion window is reduced

Human analogy: HK government or CUHK.

3-90

TCP Slow Start

When a connection begins, CongWin = 1 MSS
  example: MSS = 500 bytes & RTT = 200 msec
  initial rate = 20 kbps

The available bandwidth may be >> MSS/RTT
  desirable to quickly ramp up to a respectable rate

When a connection begins, increase the rate exponentially fast until the first loss event.

3-91

TCP Slow Start (more)

When the connection begins, increase the rate exponentially until the first loss event:
  double CongWin every RTT
  done by incrementing CongWin for every ACK received

Summary: the initial rate is slow but ramps up exponentially fast.

(Figure: Host A sends one segment, then two, then four, doubling each RTT as Host B's ACKs return.)

3-92

Refinement

After 3 dup ACKs:
  CongWin is cut in half
  the window then grows linearly
  this is known as the "fast recovery" phase, implemented in (new) TCP Reno

But after a timeout event:
  CongWin is instead set to 1 MSS
  the window then grows exponentially to a threshold, then grows linearly

Philosophy:
  3 dup ACKs indicate the network is capable of delivering some segments
  a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.

Implementation:
  a variable Threshold
  at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

(Figure: congestion window size (segments) vs. transmission round, 1-15; TCP Tahoe and TCP Reno both grow from 1 segment and cross the threshold; after a loss, Tahoe drops back to 1 while Reno drops to the new threshold.)

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.

When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss.
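The four rules above map directly onto a small state machine. The Python sketch below (my own illustration; the window and threshold are in MSS units, and one call to on_new_ack stands for one RTT's worth of ACKs) can be driven with a sequence of events.

  class CongestionControl:
      def __init__(self, threshold=8):
          self.cong_win = 1.0
          self.threshold = float(threshold)

      def on_new_ack(self):
          if self.cong_win < self.threshold:   # slow start: double per RTT
              self.cong_win *= 2
          else:                                # congestion avoidance: +1 MSS per RTT
              self.cong_win += 1
          return self.cong_win

      def on_triple_dup_ack(self):             # fast recovery (Reno)
          self.threshold = self.cong_win / 2
          self.cong_win = self.threshold
          return self.cong_win

      def on_timeout(self):                    # back to slow start
          self.threshold = self.cong_win / 2
          self.cong_win = 1.0
          return self.cong_win

  cc = CongestionControl(threshold=8)
  trace = [cc.on_new_ack() for _ in range(5)]  # 2, 4, 8, 9, 10
  trace.append(cc.on_triple_dup_ack())         # 5
  trace.append(cc.on_timeout())                # 1
  print(trace)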

3-95

TCP sender congestion control

Event: ACK receipt for previously unACKed data, state Slow Start (SS)
  Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
  Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unACKed data, state Congestion Avoidance (CA)
  Action: CongWin = CongWin + MSS * (MSS/CongWin)
  Commentary: additive increase, CongWin grows by 1 MSS every RTT

Event: loss detected by triple duplicate ACK, state SS or CA
  Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
  Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout, state SS or CA
  Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
  Commentary: enter slow start

Event: duplicate ACK, state SS or CA
  Action: increment the duplicate ACK count for the segment being ACKed
  Commentary: CongWin and Threshold not changed

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when a loss occurs.
When the window is W, throughput is W/RTT.
Just after a loss, the window drops to W/2, throughput to W/(2*RTT).
Average throughput: 0.75 * W/RTT

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput.
Requires a window size of W = 83,333 in-flight segments.

Throughput in terms of loss rate:

  throughput = (1.22 * MSS) / (RTT * sqrt(L))

-> L = 2*10^-10. Wow! New versions of TCP are needed for high speed.
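Plugging the numbers back in is a quick sanity check. The Python lines below (my own illustration) invert the throughput formula to get the loss rate L needed for 10 Gbps and the corresponding window in segments.

  MSS = 1500 * 8       # bits
  RTT = 0.100          # seconds
  target = 10e9        # bits/sec

  W = target * RTT / MSS                       # window, in segments
  L = (1.22 * MSS / (RTT * target)) ** 2       # from throughput = 1.22*MSS/(RTT*sqrt(L))

  print(f"window W   ~ {W:,.0f} segments")     # ~83,333
  print(f"loss rate L ~ {L:.1e}")              # ~2e-10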

3-98

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.

(Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R.)

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g. similar propagation delay, etc.):
  additive increase gives a slope of 1 as throughput increases
  multiplicative decrease decreases throughput proportionally

(Figure: connection 1 throughput vs. connection 2 throughput, both bounded by R; repeated cycles of "congestion avoidance: additive increase" and "loss: decrease window by factor of 2" move the operating point toward the equal-bandwidth-share line.)

3-100

Fairness (more)

Fairness and UDP:
  multimedia apps often do not use TCP
    they do not want their rate throttled by congestion control
  instead they use UDP:
    pump audio/video at a constant rate, tolerate packet loss
  research area: TCP friendly

Fairness and parallel TCP connections:
  nothing prevents an app from opening parallel connections between 2 hosts
  web browsers do this
  example: a link of rate R supporting 9 connections
    a new app asking for 1 TCP gets rate R/10
    a new app asking for 11 TCPs gets around R/2

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by:
  TCP connection establishment
  data transmission delay
  slow start (the sender's congestion window)

Notation, assumptions:
  assume one link between client and server, of rate R
  S: MSS (bits)
  O: object size (bits)
  no retransmissions (no loss, no corruption)
  the protocol's overhead is negligible

Window size:
  first assume a fixed congestion window of W segments
  then a dynamic window, modeling slow start

3-102

Fixed congestion window (1): assume no congestion-window constraint

Let W denote a 'fixed' congestion window size (a positive integer).

First case: W*S/R > RTT + S/R
  the ACK for the first segment in the window returns before a window's worth of data has been sent

  delay = 2*RTT + O/R

(The object request is piggybacked; this is the "lower bound" of the latency.)

3-103

Fixed (or static) congestion window (2)

Second case: W*S/R < RTT + S/R
  wait for the ACK after sending a window's worth of data

  delay = 2*RTT + O/R + (K-1)*[S/R + RTT - W*S/R]

where K is the number of windows of data that cover the object:
if O is the "size" of the object, K = O/(W*S) (rounded up to an integer).

3-104

TCP Delay Modeling Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the delay for one object is:

  Latency = 2*RTT + O/R + P*[RTT + S/R] - (2^P - 1)*S/R

where P is the number of times TCP idles at the server:

  P = min{Q, K-1}

  - Q is the number of times the server would idle if the object were of infinite size
  - K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

(Figure: client/server timeline - initiate TCP connection, request object, then successive windows of S/R, 2S/R, 4S/R, 8S/R, with the server idling between some windows until transmission completes and the object is delivered.)

Example:
  O/S = 15 segments
  K = 4 windows
  Q = 2
  P = min{K-1, Q} = 2
  the server idles P = 2 times

Delay components:
  2 RTT for connection establishment and request
  O/R to transmit the object
  the time the server idles due to slow start

The server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

Key facts (for k = 1, 2, ..., K):

  time to transmit the kth window = 2^(k-1) * S/R

  idle time after the kth window = [S/R + RTT - 2^(k-1) * S/R]^+
  (S/R + RTT is the time from when the server starts to send a segment until the server receives an acknowledgement)

Putting it together:

  delay = 2*RTT + O/R + sum over p = 1..P of idleTime_p
        = 2*RTT + O/R + sum over k = 1..P of [S/R + RTT - 2^(k-1) * S/R]
        = 2*RTT + O/R + P*[S/R + RTT] - (2^P - 1)*S/R

(Figure: the same client/server timeline as before, with windows S/R, 2S/R, 4S/R, 8S/R.)

3-107

TCP Delay Modeling (4)

How do we calculate K? Recall that K is the number of windows that cover the object.

  K = min{k : 2^0*S + 2^1*S + ... + 2^(k-1)*S >= O}
    = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
    = min{k : 2^k - 1 >= O/S}
    = min{k : k >= log2(O/S + 1)}
    = ceil(log2(O/S + 1))

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
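These formulas turn into a small calculator. The Python sketch below (my own illustration) computes K, Q, P and the slow-start latency for the parameters of Experiment A on the next slide; the results come out close to the tabulated values, with small differences due to rounding and the exact idle-time accounting.

  from math import ceil, floor, log2

  def slow_start_latency(O, R, S, RTT):
      K = ceil(log2(O / S + 1))                # windows that cover the object
      Q = floor(log2(1 + RTT * R / S)) + 1     # idles if the object were infinite
      P = min(K - 1, Q)
      return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R

  S = 536 * 8           # MSS in bits
  O = 100e3 * 8         # 100 kbyte object
  RTT = 0.100           # seconds
  for R in (28e3, 100e3, 1e6, 10e6):
      print(f"R = {R/1e3:>6.0f} kbps  latency ~ {slow_start_latency(O, R, S, RTT):6.2f} s")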

3-108

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

  R         O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
  28 kbps   28.6 sec   1   28.8 sec                    28.9 sec
  100 kbps  8 sec      2   8.2 sec                     8.4 sec
  1 Mbps    800 msec   5   1.0 sec                     1.5 sec
  10 Mbps   80 msec    7   0.28 sec                    0.98 sec

1. Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

  R         O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
  28 kbps   1.43 sec   1   1.63 sec                    1.73 sec
  100 kbps  0.4 sec    2   0.6 sec                     0.76 sec
  1 Mbps    40 msec    3   0.24 sec                    0.52 sec
  10 Mbps   4 msec     3   0.20 sec                    0.50 sec

1. Slow start adds appreciable delay when R is high, even for a relatively small object.

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

  R         O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
  28 kbps   1.43 sec   3   3.4 sec                     5.8 sec
  100 kbps  0.4 sec    3   2.4 sec                     5.2 sec
  1 Mbps    40 msec    3   2.0 sec                     5.0 sec
  10 Mbps   4 msec     3   2.0 sec                     5.0 sec

1. Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
  1 base HTML page (of size O bits)
  M images (each of size O bits)

Non-persistent HTTP:
  M+1 TCP connections in series
  Response time = (M+1)*O/R + (M+1)*2*RTT + sum of idle times

Persistent HTTP:
  2 RTT to request and receive the base HTML file
  1 RTT to request and receive the M images
  Response time = (M+1)*O/R + 3*RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
  suppose M/X is an integer
  1 TCP connection for the base file
  M/X sets of parallel connections for the images
  Response time = (M+1)*O/R + (M/X + 1)*2*RTT + sum of idle times
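Ignoring the slow-start idle times, the three formulas can be compared directly. The Python sketch below (my own illustration) evaluates them for the parameters used in the charts that follow (O = 5 Kbytes, M = 10, X = 5, RTT = 100 msec).

  O = 5e3 * 8          # object size in bits
  M, X = 10, 5         # images per page, parallel connections
  RTT = 0.100          # seconds

  def non_persistent(R):
      return (M + 1) * O / R + (M + 1) * 2 * RTT

  def persistent(R):
      return (M + 1) * O / R + 3 * RTT

  def parallel_non_persistent(R):
      return (M + 1) * O / R + (M / X + 1) * 2 * RTT

  for R in (28e3, 100e3, 1e6, 10e6):
      print(f"R={R/1e3:>6.0f} kbps  non-pers={non_persistent(R):6.2f}s  "
            f"pers={persistent(R):6.2f}s  parallel={parallel_non_persistent(R):6.2f}s")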

3-112

(Figure: HTTP response time in seconds at 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps for non-persistent, persistent and parallel non-persistent connections; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.)

For a low-bandwidth connection, response time is dominated by the transmission time.
Persistent connections give only a minor improvement over parallel connections.

3-113

(Figure: the same comparison with RTT = 1 sec, O = 5 Kbytes, M = 10 and X = 5; response times now range up to tens of seconds.)

For a larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport-layer services:
  multiplexing, demultiplexing
  reliable data transfer
  flow control
  congestion control

instantiation and implementation in the Internet:
  UDP
  TCP

Next:
  leaving the network "edge" (application, transport layers)
  into the network "core"

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 38: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-38

rdt30 stop-and-wait operation

first packet bit transmitted t = 0

sender receiver

RTT

last packet bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

U sender

= 008

30008 = 000027

microseconds

L R

RTT + L R =

3-39

Pipelined protocols

Pipelining sender allows multiple ldquoin-flightrdquo yet-to-be-acknowledged pkts range of sequence numbers must be

increased buffering at sender andor receiver

Two generic forms of pipelined protocols go-Back-N selective repeat

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
TCP sender action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
TCP sender action: CongWin = CongWin + MSS * (MSS / CongWin)
Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
State: SS or CA
TCP sender action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
State: SS or CA
TCP sender action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: enter slow start

Event: duplicate ACK
State: SS or CA
TCP sender action: increment duplicate ACK count for the segment being acked
Commentary: CongWin and Threshold not changed
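The event table above maps directly onto code. Below is a minimal sketch of those per-event actions (MSS and the initial Threshold are assumed values; real TCP implementations, including full fast recovery, differ in many details).

    MSS = 1460  # assumed segment size in bytes

    class TcpCongestionState:
        def __init__(self):
            self.cong_win = 1 * MSS          # CongWin
            self.threshold = 64 * 1024       # assumed initial Threshold
            self.dup_acks = 0

        def on_new_ack(self):
            """ACK for previously unacked data."""
            self.dup_acks = 0
            if self.cong_win < self.threshold:              # slow start
                self.cong_win += MSS                        # doubles CongWin each RTT
            else:                                           # congestion avoidance
                self.cong_win += MSS * MSS / self.cong_win  # ~1 MSS per RTT

        def on_dup_ack(self):
            self.dup_acks += 1
            if self.dup_acks == 3:                          # triple duplicate ACK
                self.threshold = max(self.cong_win / 2, MSS)
                self.cong_win = self.threshold              # "fast recovery" entry (Reno)

        def on_timeout(self):
            self.threshold = max(self.cong_win / 2, MSS)
            self.cong_win = 1 * MSS                         # back to slow start
            self.dup_acks = 0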

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when loss occurs.
When the window is W, throughput is W/RTT.
Just after a loss, the window drops to W/2, and throughput to W/(2 RTT).
Average throughput: 0.75 W/RTT
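A one-line justification of the 0.75 factor, under the idealized sawtooth assumption that the window grows linearly from W/2 back to W between losses:

    \text{average throughput} \approx \frac{1}{RTT}\cdot\frac{W/2 + W}{2} = \frac{0.75\,W}{RTT}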

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires window size W = 83,333 in-flight segments

Throughput in terms of loss rate L:

    throughput = (1.22 * MSS) / (RTT * sqrt(L))

To sustain 10 Gbps this requires L = 2*10^-10. Wow! New versions of TCP for high-speed networks are needed.
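Plugging the example numbers into the formula shows where the 2*10^-10 comes from (MSS expressed in bits, RTT in seconds):

    L = \left(\frac{1.22\,\mathrm{MSS}}{RTT \cdot \text{throughput}}\right)^{2}
      = \left(\frac{1.22 \times 1500 \times 8}{0.1 \times 10^{10}}\right)^{2}
      \approx 2 \times 10^{-10}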

3-98

TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g., similar propagation delay, etc.):
additive increase gives a slope of 1 as throughput increases
multiplicative decrease decreases throughput proportionally

[Figure: Connection 2 throughput vs. Connection 1 throughput, both axes up to R; the trajectory of alternating "congestion avoidance: additive increase" and "loss: decrease window by factor of 2" converges toward the equal-bandwidth-share line]
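A tiny simulation (my own sketch with assumed constants, not from the slides) that mimics the figure: two AIMD flows halve their windows whenever their sum exceeds the bottleneck capacity, and converge toward an equal share.

    # Two-flow AIMD convergence sketch (assumed units: window and capacity in MSS).
    def simulate_aimd(w1=1.0, w2=40.0, capacity=50.0, rounds=200):
        history = []
        for _ in range(rounds):
            if w1 + w2 > capacity:        # shared bottleneck overflows: both see loss
                w1, w2 = w1 / 2, w2 / 2   # multiplicative decrease
            else:
                w1, w2 = w1 + 1, w2 + 1   # additive increase (1 MSS per RTT)
            history.append((w1, w2))
        return history

    final_w1, final_w2 = simulate_aimd()[-1]
    print(final_w1, final_w2)   # the two windows end up close to each other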

3-100

Fairness (more)

Fairness and UDP:
Multimedia apps often do not use TCP: they do not want their rate throttled by congestion control.
Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss.
Research area: TCP-friendly congestion control.

Fairness and parallel TCP connections:
Nothing prevents an app from opening parallel connections between 2 hosts. Web browsers do this.
Example: a link of rate R supporting 9 connections:
  a new app asks for 1 TCP connection, gets rate R/10
  a new app asks for 11 TCP connections, gets around R/2

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by: TCP connection establishment, data transmission delay, slow start (sender's congestion window)

Notation, assumptions:
assume one link between client and server, of rate R
S: MSS (bits); O: object size (bits)
no retransmissions (no loss, no corruption); protocol overhead is negligible

Window size:
first assume a fixed congestion window of W segments
then dynamic window modeling: slow start

3-102

Fixed congestion window (1)
Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R. The ACK for the first segment in the window returns before a window's worth of data has been sent, so the window effectively imposes no constraint.

delay = 2 RTT + O/R

(The object request is piggybacked; this is the "lower bound" of latency.)

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R. The sender waits for an ACK after sending a window's worth of data.

delay = 2 RTT + O/R + (K-1) [S/R + RTT - WS/R]

where K is the number of windows of data that cover the object: if O is the "size" of the object, K = O/(WS), rounded up to an integer.
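A small helper (parameter names assumed) that evaluates both cases of the fixed-window delay formula above:

    import math

    def fixed_window_delay(O, S, R, RTT, W):
        """Latency to fetch an O-bit object with MSS of S bits, link rate R bps,
        round-trip time RTT seconds, and a fixed window of W segments."""
        K = math.ceil(O / (W * S))                  # windows needed to cover the object
        stall = S / R + RTT - W * S / R             # idle time per window, if positive
        if stall <= 0:                              # first case: the pipe never drains
            return 2 * RTT + O / R
        return 2 * RTT + O / R + (K - 1) * stall    # second case: sender stalls K-1 times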

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is:

    Latency = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where P is the number of times TCP idles at the server:

    P = min{Q, K - 1}

- where Q is the number of times the server would idle if the object were of infinite size
- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[Figure: client-server timing diagram: initiate TCP connection, request object, then windows of S/R, 2S/R, 4S/R, 8S/R, complete transmission, object delivered]

Example:
O/S = 15 segments
K = 4 windows
Q = 2
P = min{K-1, Q} = 2
Server idles P = 2 times

Delay components:
2 RTT for connection establishment and request
O/R to transmit the object
time the server idles due to slow start (server idles P = min{K-1, Q} times)

3-106

TCP Delay Modeling (3)

delay = O/R + 2 RTT + sum_{p=1..P} idleTime_p
      = O/R + 2 RTT + sum_{p=1..P} [ S/R + RTT - 2^(p-1) S/R ]
      = O/R + 2 RTT + P [RTT + S/R] - (2^P - 1) S/R

where, for the kth window (k = 1, 2, ..., K):
  2^(k-1) S/R                      = time to transmit the kth window
  S/R + RTT                        = time from when the server starts to send a segment until it receives the acknowledgement
  [S/R + RTT - 2^(k-1) S/R]^+      = idle time after the kth window

[Figure: the same client-server timing diagram as on the previous slide, with windows S/R, 2S/R, 4S/R, 8S/R]

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

K = min{ k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O }
  = min{ k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S }
  = min{ k : 2^k - 1 >= O/S }
  = min{ k : k >= log2(O/S + 1) }
  = ceil( log2(O/S + 1) )

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
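Putting the pieces together, here is a sketch of the whole slow-start latency model. The expression for Q follows the derivation left to the homework (it is my reconstruction, treat it as an assumption); the rest is exactly the formulas above.

    import math

    def slow_start_latency(O, S, R, RTT):
        """Latency to fetch an O-bit object over a link of rate R bps, MSS of S bits,
        round-trip time RTT seconds, under the slow-start model of these slides."""
        K = math.ceil(math.log2(O / S + 1))            # windows that cover the object
        Q = math.ceil(math.log2(1 + RTT * R / S))      # idle periods for an infinite object (assumed form)
        P = min(Q, K - 1)                              # times the server actually idles
        return 2 * RTT + O / R + P * (RTT + S / R) - (2**P - 1) * S / R

    # Example: S = 536 bytes, RTT = 100 msec, O = 100 kbytes at R = 28 kbps
    # gives about 28.9 sec, matching the first row of Experiment A below.
    print(slow_start_latency(O=800_000, S=536 * 8, R=28_000, RTT=0.1))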

3-108

Experiment A

R          O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    28.6 sec   1   28.8 sec                    28.9 sec
100 kbps   8.0 sec    2   8.2 sec                     8.4 sec
1 Mbps     800 msec   5   1.0 sec                     1.5 sec
10 Mbps    80 msec    7   0.28 sec                    0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

1. Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R          O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    1.43 sec   1   1.63 sec                    1.73 sec
100 kbps   0.4 sec    2   0.6 sec                     0.76 sec
1 Mbps     40 msec    3   0.24 sec                    0.52 sec
10 Mbps    4 msec     3   0.20 sec                    0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

1. Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

R          O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    1.43 sec   3   3.4 sec                     5.8 sec
100 kbps   0.4 sec    3   2.4 sec                     5.2 sec
1 Mbps     40 msec    3   2.0 sec                     5.0 sec
10 Mbps    4 msec     3   2.0 sec                     5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

1. Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling
Assume a Web page consists of:
1 base HTML page (of size O bits)
M images (each of size O bits)

Non-persistent HTTP:
M+1 TCP connections in series
Response time = (M+1) O/R + (M+1) 2 RTT + sum of idle times

Persistent HTTP:
2 RTT to request and receive the base HTML file
1 RTT to request and receive the M images
Response time = (M+1) O/R + 3 RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
Suppose M/X is an integer
1 TCP connection for the base file
M/X sets of parallel connections for the images
Response time = (M+1) O/R + (M/X + 1) 2 RTT + sum of idle times
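A quick way to compare the three formulas (ignoring the slow-start idle times, which the slides add separately); the function and parameter names are assumptions for illustration.

    def http_response_times(O, R, RTT, M, X):
        """Response-time formulas from the slide, with the 'sum of idle times' omitted."""
        non_persistent = (M + 1) * O / R + (M + 1) * 2 * RTT
        persistent     = (M + 1) * O / R + 3 * RTT
        parallel       = (M + 1) * O / R + (M / X + 1) * 2 * RTT
        return non_persistent, persistent, parallel

    # Example: RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5, R = 100 kbps
    print(http_response_times(O=5 * 8000, R=100_000, RTT=0.1, M=10, X=5))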

3-112

[Figure: bar chart of HTTP response time (in seconds, 0-20) at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5]

For a low-bandwidth connection, response time is dominated by transmission time.
Persistent connections give only a minor improvement over parallel connections.

3-113

[Figure: bar chart of HTTP response time (in seconds, 0-70) at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5]

For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary
principles behind transport layer services: multiplexing, demultiplexing, reliable data transfer, flow control, congestion control
instantiation and implementation in the Internet: UDP, TCP

Next:
leaving the network "edge" (application, transport layers)
into the network "core"


3-39

Pipelined protocols

Pipelining: sender allows multiple "in-flight", yet-to-be-acknowledged pkts
range of sequence numbers must be increased
buffering at sender and/or receiver
Two generic forms of pipelined protocols: Go-Back-N, selective repeat

3-40

Pipelining increased utilization

[Figure: sender/receiver timeline: first packet bit transmitted at t = 0, last bit at t = L/R; the first and last packet bits arrive and an ACK is sent; the ACK arrives and the next packet is sent at t = RTT + L/R; ACKs for the 2nd and 3rd packets follow]

U_sender = (3 L/R) / (RTT + L/R) = 0.024 / 30.008 = 0.0008

Pipelining three packets increases utilization by a factor of 3 over stop-and-wait.
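The numbers in the slide correspond to (assumed here) L = 1000 bytes = 8000 bits sent at R = 1 Gbps, so L/R = 8 microseconds = 0.008 ms, with RTT = 30 ms:

    U_{\text{sender}} = \frac{3L/R}{RTT + L/R}
                      = \frac{3 \times 0.008\ \text{ms}}{30\ \text{ms} + 0.008\ \text{ms}}
                      = \frac{0.024}{30.008} \approx 0.0008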

3-41

Go-Back-N (sliding window protocol)

Sender: k-bit seq # in pkt header; "window" of up to N consecutive unACKed pkts allowed
ACK(n): ACKs all pkts up to and including seq # n ("cumulative ACK"); the sender may receive duplicate ACKs (see receiver)
timer for each in-flight pkt
timeout(n): retransmit pkt n and all higher-seq-# pkts in the window
(For now, treat seq # as unlimited)

Q: what happens when a receiver is totally disconnected? MAX RETRY
DON'T FALL ASLEEP

3-42

GBN sender extended FSM

(single state: Wait)   init: base = 1; nextseqnum = 1

rdt_send(data):
  if (nextseqnum < base + N) {
      sndpkt[nextseqnum] = make_pkt(nextseqnum, data, chksum)
      udt_send(sndpkt[nextseqnum])
      if (base == nextseqnum) start_timer
      nextseqnum++
  } else refuse_data(data)            /* buffer data or block the higher app */

timeout:
  start_timer                         /* reset timer */
  udt_send(sndpkt[base]); udt_send(sndpkt[base+1]); ... ; udt_send(sndpkt[nextseqnum-1])

rdt_rcv(rcvpkt) && corrupt(rcvpkt):   /* ignore */

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
  base = getacknum(rcvpkt) + 1
  if (base == nextseqnum) stop_timer  /* no pkt in pipe */
  else start_timer

3-43

GBN receiver extended FSM

ACK-only: always send an ACK for the correctly-received pkt with the highest in-order seq #
  may generate duplicate ACKs
  need only remember expectedseqnum
out-of-order pkt: discard (don't buffer) -> no receiver buffering; re-ACK the pkt with the highest in-order seq #

(single state: Wait)   init: expectedseqnum = 1; sndpkt = make_pkt(expectedseqnum, ACK, chksum)

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum):
  extract(rcvpkt, data); deliver_data(data)
  sndpkt = make_pkt(expectedseqnum, ACK, chksum)
  udt_send(sndpkt)
  expectedseqnum++

default:
  udt_send(sndpkt)    /* re-send ACK for the last in-order pkt */

If an in-order pkt is received, deliver it to the app and ACK it; else just drop it.
We don't even need a timer at the receiver: simply rely on retransmissions by the sender.
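A compact, self-contained sketch (my own simplified rendering, not the book's FSMs) of the Go-Back-N sender and receiver logic over an abstract channel; checksums, sockets, and real timers are omitted.

    # Minimal Go-Back-N logic sketch (illustrative only).
    class GbnSender:
        def __init__(self, N, send_fn, start_timer, stop_timer):
            self.N, self.base, self.nextseq = N, 1, 1
            self.buffer = {}        # seq -> payload, kept until cumulatively ACKed
            self.send_fn, self.start_timer, self.stop_timer = send_fn, start_timer, stop_timer

        def rdt_send(self, data):
            if self.nextseq < self.base + self.N:
                self.buffer[self.nextseq] = data
                self.send_fn(self.nextseq, data)
                if self.base == self.nextseq:
                    self.start_timer()
                self.nextseq += 1
                return True
            return False                             # window full: refuse / block

        def on_ack(self, n):                         # cumulative ACK for everything <= n
            self.base = max(self.base, n + 1)
            if self.base == self.nextseq:
                self.stop_timer()                    # nothing left in flight
            else:
                self.start_timer()

        def on_timeout(self):
            self.start_timer()
            for seq in range(self.base, self.nextseq):   # go back N: resend all unACKed
                self.send_fn(seq, self.buffer[seq])

    class GbnReceiver:
        def __init__(self, deliver_fn, ack_fn):
            self.expected, self.deliver_fn, self.ack_fn = 1, deliver_fn, ack_fn

        def on_packet(self, seq, data):
            if seq == self.expected:                 # in order: deliver and advance
                self.deliver_fn(data)
                self.expected += 1
            self.ack_fn(self.expected - 1)           # always re-ACK highest in-order seq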

3-44

GBN in action

What determines the size of the window?
1. RTT
2. Buffer at the receiver (flow control)
3. Network congestion

Q: GBN can have poor performance. How? The sender sends pkts 1,2,3,4,5,6,7,8,9; pkt 1 gets lost; the receiver gets pkts 2,3,4,5,... but will discard them.
Window size = N = 4

3-45

Selective Repeat (improvement of the GBN protocol)
receiver individually acknowledges all correctly received pkts
  buffers pkts as needed for eventual in-order delivery to the upper layer
  e.g., sender sends pkts 1,2,3,4,...,10; receiver got 2,4,6,8,10; sender resends 1,3,5,7,9
sender only resends pkts for which an ACK was not received
  sender keeps a timer for EACH unACKed pkt
sender window: N consecutive seq #'s; again limits the seq #'s of sent, unACKed pkts

3-46

Selective repeat sender receiver windows

[Figure: sender-side and receiver-side views of the selective-repeat windows]

Q: why do we have this (pkts already ACKed that still sit inside the window)?
The ACK was lost, or the ACK is still on its way.

3-47

Selective repeat

sender:
  data from above: if the next available seq # is in the window, send the pkt
  timeout(n): resend pkt n, restart its timer
  ACK(n) in [sendbase, sendbase+N]:
    mark pkt n as received
    if n is the smallest unACKed pkt, advance the window base to the next unACKed seq # (slide the window)

receiver:
  pkt n in [rcvbase, rcvbase+N-1]:
    send ACK(n)
    out-of-order: buffer
    in-order: deliver (also deliver buffered in-order pkts), advance the window to the next not-yet-received pkt
  pkt n in [rcvbase-N, rcvbase-1]: ACK(n)
  otherwise: ignore

Q: why do we need this (ACKing a pkt below the window)?
The ACK got lost; the sender may time out and resend the pkt, and we need to ACK it.

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example: seq #'s 0, 1, 2, 3; window size N = 3
the receiver sees no difference in the two scenarios
it incorrectly passes duplicate data as new in (a)

Q: what is the relationship between seq # space size and window size?
N <= 2^k / 2 (the window can be at most half the sequence-number space)
Practical issue: in real life we use k bits to implement the seq #

3-50

Why bother study reliable data transfer

We know it is provided by TCP, so why bother to study it?
Sometimes we may need to implement "some form" of reliable transfer without heavy-duty TCP.
A good example is multimedia streaming: even though the application is loss-tolerant, if too many packets get lost the visual quality suffers, so we may want to implement some form of reliable transfer.
At the very least, appreciate the "good services" provided by some Internet gurus.

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview: RFCs 793, 1122, 1323, 2018, 2581
point-to-point: one sender, one receiver (not multicast)
reliable, in-order byte stream: no "message boundaries" (in the app layer we need delimiters)
pipelined: TCP congestion and flow control set the window size
send & receive buffers
full duplex data: bi-directional data flow in the same connection; MSS: maximum segment size
connection-oriented: handshaking (exchange of control msgs) inits sender and receiver state (e.g., buffer size) before data exchange
flow controlled: sender will not overwhelm receiver

[Figure: application writes data into the TCP send buffer at the sending socket; segments carry it to the TCP receive buffer at the receiving socket, where the application reads data]

(The 800-lb gorilla in the transport stack: PAY ATTENTION!)

3-53

TCP segment structure

[Figure: TCP segment format, 32 bits wide]
source port #, dest port #
sequence number (counting by "bytes" of data, not segments)
acknowledgement number
head len, not used, flags U A P R S F, receive window (# bytes rcvr is willing to accept)
checksum (Internet checksum, as in UDP), urgent data pointer
options (variable length): due to this field we have a variable-length header
application data (variable length)

Flag meanings:
URG: urgent data (generally not used)
ACK: ACK # valid
PSH: push data now (generally not used)
RST, SYN, FIN: connection establishment (setup, teardown commands)

3-54

TCP seq #'s and ACKs
Seq #'s: byte-stream "number" of the first byte in the segment's data
ACKs: seq # of the next byte expected from the other side; cumulative ACK
Q: how does the receiver handle out-of-order segments?
A: the TCP spec doesn't say; it is up to the implementor

[Figure: simple telnet scenario between Host A and Host B:
  user types 'C': A sends Seq=42, ACK=79, data='C'
  B ACKs receipt of 'C' and echoes back 'C': Seq=79, ACK=43, data='C'
  A ACKs receipt of the echoed 'C': Seq=43, ACK=80]

Initial seq #'s are negotiated during the 3-way handshake.

3-55

TCP Round Trip Time and Timeout
Q: how to set the TCP timeout value?
  longer than RTT, but RTT varies
  too short: premature timeout, unnecessary retransmissions
  too long: slow reaction to segment loss, poor performance

Q: how to estimate RTT?
  SampleRTT: measured time from segment transmission until ACK receipt (ignore retransmissions)
  SampleRTT will vary; want the estimated RTT "smoother": average several recent measurements, not just the current SampleRTT

[Figure: two tx/retx/ack timelines showing why measuring across a retransmission is ambiguous: the estimated RTT comes out too long in one case and too short in the other]

3-56

TCP Round Trip Time and Timeout

    EstimatedRTT = (1 - α) · EstimatedRTT + α · SampleRTT

Exponentially weighted moving average: the influence of a past sample decreases exponentially fast; typical value α = 0.125

ERTT(0) = 0
ERTT(1) = (1-α) · ERTT(0) + α · SRTT(1) = α · SRTT(1)
ERTT(2) = (1-α) · α · SRTT(1) + α · SRTT(2)
ERTT(3) = (1-α)·(1-α) · α · SRTT(1) + (1-α) · α · SRTT(2) + α · SRTT(3)

3-57

Example RTT estimation

[Figure: RTT from gaia.cs.umass.edu to fantasia.eurecom.fr: SampleRTT and EstimatedRTT (roughly 100-350 milliseconds) plotted against time (seconds); EstimatedRTT is visibly smoother than SampleRTT]

3-58

TCP Round Trip Time and Timeout
Setting the timeout (Jacobson/Karels): EstimatedRTT plus a "safety margin"
  a large variation in EstimatedRTT -> a larger safety margin
First estimate how much SampleRTT deviates from EstimatedRTT:

    DevRTT = (1 - β) · DevRTT + β · |SampleRTT - EstimatedRTT|      (typically β = 0.25)

Then set the timeout interval:

    TimeoutInterval = EstimatedRTT + 4 · DevRTT
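A small sketch of the two estimators and the resulting timeout, using the α and β values from the slides; initializing EstimatedRTT to the first sample is my assumption, not something the slides specify.

    class RttEstimator:
        """EWMA RTT estimation and timeout computation (Jacobson/Karels style)."""
        def __init__(self, alpha=0.125, beta=0.25):
            self.alpha, self.beta = alpha, beta
            self.estimated_rtt = None
            self.dev_rtt = 0.0

        def on_sample(self, sample_rtt):
            if self.estimated_rtt is None:          # first measurement (assumed init)
                self.estimated_rtt = sample_rtt
            else:
                self.dev_rtt = ((1 - self.beta) * self.dev_rtt
                                + self.beta * abs(sample_rtt - self.estimated_rtt))
                self.estimated_rtt = ((1 - self.alpha) * self.estimated_rtt
                                      + self.alpha * sample_rtt)
            return self.timeout_interval()

        def timeout_interval(self):
            return self.estimated_rtt + 4 * self.dev_rtt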

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates an rdt service on top of IP's unreliable service
pipelined segments (for performance)
cumulative ACKs
TCP uses a single retransmission timer
retransmissions are triggered by: timeout events, duplicate ACKs (for performance reasons)

Initially, consider a simplified TCP sender:
ignore duplicate ACKs
ignore flow control and congestion control

3-61

TCP sender events
data rcvd from app:
  create segment with seq # (seq # is the byte-stream number of the first data byte in the segment)
  start timer if not already running (think of the timer as being for the oldest unACKed segment)
  expiration interval: TimeOutInterval
timeout:
  retransmit the segment that caused the timeout
  restart timer
ACK rcvd:
  if it acknowledges previously unACKed segments: update what is known to be ACKed; start the timer if there are outstanding segments

3-62

TCP sender (simplified)

NextSeqNum = InitialSeqNum; SendBase = InitialSeqNum
loop (forever) {
  switch(event) {
    event: data received from application above
      create TCP segment with sequence number NextSeqNum
      if (timer currently not running) start timer
      pass segment to IP
      NextSeqNum = NextSeqNum + length(data)
    event: timer timeout
      retransmit the not-yet-acknowledged segment with the smallest sequence number
      start timer
    event: ACK received, with ACK field value of y
      if (y > SendBase) {
        SendBase = y
        if (there are currently not-yet-acknowledged segments) start timer
      }
  }
} /* end of loop forever */

Comment: SendBase-1 = last cumulatively ACKed byte.
Example: SendBase-1 = 71 and y = 73, so the rcvr wants 73+; y > SendBase, so new data is ACKed.

3-63

TCP retransmission scenarios

[Figure: two Host A / Host B timelines.
 Lost ACK scenario: A sends Seq=92, 8 bytes of data; the ACK=100 is lost; A times out and resends Seq=92; B re-ACKs with ACK=100; SendBase moves to 100.
 Premature timeout scenario: A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); ACK=100 arrives after A's timeout, so A resends Seq=92; B's cumulative ACK=120 then acknowledges both; SendBase moves from 100 to 120.]

3-64

TCP retransmission scenarios (more)

[Figure: cumulative ACK scenario: Host A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); the ACK=100 is lost, but the later cumulative ACK=120 arrives before A's timeout, so nothing is retransmitted; SendBase moves to 120.]

Room for improvement?

3-65

TCP ACK generation [RFC 1122, RFC 2581]

Event at receiver: arrival of in-order segment with expected seq #; all data up to the expected seq # already ACKed
TCP receiver action: delayed ACK; wait up to 500 ms for the next segment; if no next segment, send ACK

Event at receiver: arrival of in-order segment with expected seq #; one other segment has an ACK pending
TCP receiver action: immediately send a single cumulative ACK, ACKing both in-order segments

Event at receiver: arrival of out-of-order segment with higher-than-expected seq #; gap detected
TCP receiver action: immediately send a duplicate ACK, indicating the seq # of the next expected byte

Event at receiver: arrival of a segment that partially or completely fills the gap
TCP receiver action: immediately send an ACK, provided that the segment starts at the lower end of the gap

ACK the "largest in-order byte" seq #.
Delayed ACKs REDUCE FEEDBACK TRAFFIC BY HALF.

3-66

Fast Retransmit

The timeout period is often relatively long: a long delay before resending a lost packet.
Detect lost segments via duplicate ACKs:
  the sender often sends many segments back-to-back
  if a segment is lost, there will likely be many duplicate ACKs
If the sender receives 3 duplicate ACKs for the same data, it supposes that the segment after the ACKed data was lost:
  fast retransmit: resend the segment before its timer expires

3-67

Fast retransmit algorithm:

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments) start timer
  }
  else {                                        /* a duplicate ACK for an already-ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y     /* fast retransmit */
  }

Q: why resend the pkt with seq # y?
A: that is what the receiver expects.

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of a TCP connection has a receive buffer
speed-matching service: matching the send rate to the receiving app's drain rate
the app process may be slow at reading from the buffer
flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast

3-70

TCP Flow control how it works

(Suppose the TCP receiver discards out-of-order segments.)
spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
Rcvr advertises the spare room by including the value of RcvWindow in segments
Sender limits unACKed data to RcvWindow: guarantees the receive buffer doesn't overflow
This goes to show that the design of the header is important.
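A sketch of both sides of this rule (names assumed to mirror the slide's variables):

    def advertised_rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
        """Receiver side: spare room advertised back to the sender."""
        return rcv_buffer - (last_byte_rcvd - last_byte_read)

    def may_send(last_byte_sent, last_byte_acked, rcv_window, nbytes):
        """Sender side: keep unACKed data within the advertised window."""
        return (last_byte_sent - last_byte_acked) + nbytes <= rcv_window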

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender and receiver establish a "connection" before exchanging data segments
initialize TCP variables: seq #'s, buffers, flow control info (e.g., RcvWindow)
client: connection initiator: Socket clientSocket = new Socket("hostname", port number);
server: contacted by client: Socket connectionSocket = welcomeSocket.accept();

Three-way handshake:
Step 1: client host sends a TCP SYN segment to the server: specifies initial seq #, no data
Step 2: server host receives SYN, replies with a SYN-ACK segment: server allocates buffers, specifies the server's initial seq #
Step 3: client receives SYN-ACK, replies with an ACK segment, which may contain data

IMPORTANT: SYN FLOODING

3-73

TCP three-way handshake

[Figure: client-server message exchange]
Connection request:  SYN=1, seq=client_isn
Connection granted:  SYN=1, seq=server_isn, ack=client_isn+1
ACK:                 SYN=0, seq=client_isn+1, ack=server_isn+1

3-74

TCP Connection Management (cont)

Closing a connection:
client closes the socket: clientSocket.close();
Step 1: client end system sends a TCP FIN control segment to the server
Step 2: server receives FIN, replies with ACK; closes the connection, sends FIN

[Figure: client/server timeline: client sends FIN (close), server ACKs and later sends its own FIN (close), client ACKs and enters timed wait, then the connection is closed]

Q: why don't we combine the ACK and FIN?
The sender may still have some data in the pipeline.

3-75

TCP Connection Management (cont)

Step 3: client receives FIN, replies with ACK
  enters "timed wait": will respond with ACK to received FINs
Step 4: server receives ACK; connection closed
Note: with a small modification, this can handle simultaneous FINs

[Figure: the same client/server timeline, with both sides closing and the client's timed wait before the connection is fully closed]

3-76

TCP Connection Management (cont)

[Figure: TCP client lifecycle and TCP server lifecycle state diagrams]

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion, informally: "too many sources sending too much data too fast for the network to handle"
different from flow control!
manifestations:
  lost packets (buffer overflow at routers)
  long delays (queueing in router buffers)
another top-10 problem!
TCP provides one of the MANY WAYS to perform congestion control

3-79

Causes/costs of congestion: scenario 1
two senders, two receivers (homogeneous)
one router, infinite buffers, capacity C
no retransmission
large delays when congested
maximum achievable throughput

[Figure: Host A and Host B send λ_in (original data) from their applications into a router with unlimited shared output link buffers; λ_out is delivered to the receiving applications]

3-80

Causes/costs of congestion: scenario 2
one router, finite buffers
sender retransmission of lost packets

[Figure: Host A and Host B send λ_in (original data) and λ'_in (original data plus retransmitted data) into a router with finite shared output link buffers; λ_out is delivered]

Retransmissions happen because the transport layer reacts when a pkt gets dropped.

3-81

Causes/costs of congestion: scenario 2
always: λ_in = λ_out (goodput)
"perfect" retransmission, only when loss: λ'_in > λ_out
retransmission of a delayed (not lost) packet makes λ'_in larger (than the perfect case) for the same λ_out

"costs" of congestion:
more work (retransmissions) for a given "goodput"
unneeded retransmissions: the link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3
four senders, multihop paths, timeout/retransmit
Q: what happens as λ_in and λ'_in increase?

[Figure: Host A and Host B send λ_in (original data) and λ'_in (original data plus retransmitted data) across routers with finite shared output link buffers; λ_out is delivered]

3-83

Causes/costs of congestion: scenario 3
Another "cost" of congestion: when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted

[Figure: λ_out collapses as the offered load keeps increasing]

The system collapses (e.g., students).

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
no explicit feedback from the network
congestion inferred from end-system observed loss and delay
approach taken by TCP

Network-assisted congestion control:
routers provide feedback to end systems
  single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
  explicit rate the sender should send at

3-85

Case study ATM ABR congestion control

ABR: available bit rate
"elastic service"
if the sender's path is "underloaded": the sender should use the available bandwidth
if the sender's path is congested: the sender is throttled to the minimum guaranteed rate

RM (resource management) cells:
sent by the sender, interspersed with data cells
bits in the RM cell are set by switches ("network-assisted"):
  NI bit: no increase in rate (mild congestion)
  CI bit: congestion indication
RM cells are returned to the sender by the receiver, with the bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell; the sender's send rate is thus the minimum supportable rate on the path
EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 40: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-40

Pipelining increased utilization

first packet bit transmitted t = 0

sender receiver

RTT

last bit transmitted t = L R

first packet bit arriveslast packet bit arrives send ACK

ACK arrives send next packet t = RTT + L R

last bit of 2nd packet arrives send ACKlast bit of 3rd packet arrives send ACK

U sender

= 024

30008 = 00008

microseconds

3 L R

RTT + L R =

Increase utilizationby a factor of 3

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

the app process may be slow at reading from the buffer

flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast

speed-matching service: matching the send rate to the receiving app's drain rate

3-70

TCP Flow control: how it works

(Suppose the TCP receiver discards out-of-order segments.)

spare room in buffer = RcvWindow
                     = RcvBuffer - [LastByteRcvd - LastByteRead]

Rcvr advertises the spare room by including the value of RcvWindow in segments.

Sender limits unACKed data to RcvWindow:
- guarantees the receive buffer doesn't overflow

This goes to show that the design process of the header is important.
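A minimal sketch of that bookkeeping (function and variable names are illustrative, not part of any socket API):

def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    # spare room in the receive buffer, advertised back to the sender
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

def sender_may_send(last_byte_sent, last_byte_acked, advertised_rwnd, nbytes):
    # sender keeps the amount of unACKed data within the advertised window
    return (last_byte_sent - last_byte_acked) + nbytes <= advertised_rwnd

# e.g. a 64 KB buffer holding 32 KB of unread data advertises a 32 KB window
assert rcv_window(64_000, 48_000, 16_000) == 32_000
assert not sender_may_send(100_000, 80_000, 32_000, 20_000)  # 20 KB in flight + 20 KB more > 32 KB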

3-71

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    - segment structure
    - reliable data transfer
    - flow control
    - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender and receiver establish a "connection" before exchanging data segments.

initialize TCP variables:
- seq. #s
- buffers, flow control info (e.g. RcvWindow)

client: connection initiator
  Socket clientSocket = new Socket(hostname, port number);

server: contacted by client
  Socket connectionSocket = welcomeSocket.accept();

Three way handshake:

Step 1: client host sends TCP SYN segment to server
- specifies initial seq #
- no data

Step 2: server host receives SYN, replies with SYN-ACK segment
- server allocates buffers
- specifies server initial seq #

Step 3: client receives SYN-ACK, replies with ACK segment, which may contain data

IMPORTANT: SYN FLOODING

3-73

TCP three-way handshake

[Timing diagram: client ("connection request") <-> server ("connection granted")]

1. client -> server: SYN=1, seq=client_isn
2. server -> client: SYN=1, seq=server_isn, Ack=client_isn+1   (SYN-ACK)
3. client -> server: SYN=0, seq=client_isn+1, Ack=server_isn+1 (ACK)
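For reference, a small Python sketch showing where the handshake happens in practice: the kernel performs the SYN / SYN-ACK / ACK exchange inside connect() and accept() (the loopback address and port number are arbitrary choices for the example):

import socket, threading

welcome = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
welcome.bind(("127.0.0.1", 54321))    # arbitrary local port for the example
welcome.listen(1)                     # server is now willing to be contacted

def serve():
    conn, addr = welcome.accept()     # returns once the 3-way handshake completes
    conn.close()                      # starts the FIN/ACK teardown described below

threading.Thread(target=serve).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 54321))  # client sends SYN, gets SYN-ACK, replies with ACK
client.close()
welcome.close()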

3-74

TCP Connection Management (cont)

Closing a connection:

client closes socket: clientSocket.close();

Step 1: client end system sends a TCP FIN control segment to the server.

Step 2: server receives FIN, replies with ACK; closes the connection, sends its own FIN.

[Timing diagram: client sends FIN ("close"); server replies with ACK and later its own FIN ("close"); client ACKs and sits in "timed wait" before moving to "closed"]

Q: why don't we combine the ACK and the FIN?
The sender (here, the server) may still have some data in the pipeline to deliver before it can send its FIN.

3-75

TCP Connection Management (cont)

Step 3: client receives FIN, replies with ACK
- enters "timed wait": will respond with ACK to any received FINs

Step 4: server receives ACK. Connection closed.

Note: with a small modification, this can handle simultaneous FINs.

[Timing diagram: client FIN -> server ACK; server FIN -> client ACK; both sides pass through "closing", the client waits out the timed wait, then both end up "closed"]

3-76

TCP Connection Management (cont)

[State diagrams: TCP client lifecycle and TCP server lifecycle]

3-77

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    - segment structure
    - reliable data transfer
    - flow control
    - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-78

Principles of Congestion Control

Congestion, informally: "too many sources sending too much data too fast for the network to handle"

different from flow control!

manifestations:
- lost packets (buffer overflow at routers)
- long delays (queueing in router buffers)

another top-10 problem!

TCP provides one of the MANY WAYS to perform CC.

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous)
one router, infinite buffers, output link capacity C
no retransmission

[Figure: Host A and Host B each push lambda_in (original data, from application) through a router with unlimited shared output link buffers; lambda_out is delivered to the receiving application]

- large delays when congested
- maximum achievable throughput

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers
sender retransmission of lost packets

[Figure: Host A offers lambda_in (original data) and lambda'_in (original data plus retransmitted data, added by the transport layer because pkts got dropped) to a router with finite shared output link buffers; lambda_out is delivered to Host B]

3-81

Causes/costs of congestion: scenario 2

always: lambda_in = lambda_out (goodput)

"perfect" retransmission, only when loss: lambda'_in > lambda_out

retransmission of a delayed (not lost) packet makes lambda'_in larger (than the perfect case) for the same lambda_out

"costs" of congestion:
- more work (retransmissions) for a given "goodput"
- unneeded retransmissions: the link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3

four senders, multihop paths, timeout/retransmit

Q: what happens as lambda_in and lambda'_in increase?

[Figure: Host A offers lambda_in (original data) and lambda'_in (original data plus retransmitted data) through routers with finite shared output link buffers; lambda_out is delivered to Host B]

3-83

Causes/costs of congestion: scenario 3

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted.

[Figure: as lambda'_in grows, lambda_out collapses toward zero]

System collapses (e.g. students!)

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
- no explicit feedback from the network
- congestion inferred from end-system observed loss, delay
- approach taken by TCP

Network-assisted congestion control:
- routers provide feedback to end systems
- single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
- explicit rate the sender should send at

3-85

Case study: ATM ABR congestion control

ABR: available bit rate

"elastic service":
- if sender's path is "underloaded": sender should use the available bandwidth
- if sender's path is congested: sender throttled to its minimum guaranteed rate

RM (resource management) cells:
- sent by sender, interspersed with data cells
- bits in the RM cell set by switches ("network-assisted"):
  NI bit: no increase in rate (mild congestion)
  CI bit: congestion indication
- RM cells returned to sender by the receiver, with bits intact

3-86

Case study: ATM ABR congestion control

two-byte ER (explicit rate) field in the RM cell:
- a congested switch may lower the ER value in the cell
- the sender's send rate is thus set to the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch
- if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    - segment structure
    - reliable data transfer
    - flow control
    - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission:
  LastByteSent - LastByteAcked <= CongWin

Roughly,
  rate = CongWin / RTT  bytes/sec

CongWin is a dynamic function of perceived network congestion.

How does the sender perceive congestion?
- loss event = timeout or 3 duplicate ACKs
- the TCP sender reduces its rate (CongWin) after a loss event

three mechanisms:
- AIMD
- slow start
- conservative after timeout events

Note: CC must also be efficient, to make use of the available BW.

3-89

TCP AIMD

[Figure: long-lived TCP connection; congestion window (8, 16, 24 Kbytes) traces a sawtooth over time]

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

Role of ACKs:
1. an indication of loss
2. ACKs are self-clocking: if the delay is large, the rate at which the congestion window grows is reduced

Human analogy: HK government or CUHK?

3-90

TCP Slow Start

When a connection begins, CongWin = 1 MSS
- Example: MSS = 500 bytes & RTT = 200 msec
- initial rate = MSS/RTT = 20 kbps

available bandwidth may be >> MSS/RTT
- desirable to quickly ramp up to a respectable rate

When a connection begins, increase the rate exponentially fast until the first loss event.

3-91

TCP Slow Start (more)

When a connection begins, increase the rate exponentially until the first loss event:
- double CongWin every RTT
- done by incrementing CongWin for every ACK received

Summary: the initial rate is slow but ramps up exponentially fast.

[Timing diagram, Host A -> Host B: one segment in the first RTT, two segments in the next RTT, then four segments, ...]

3-92

Refinement

After 3 dup ACKs:
- CongWin is cut in half
- the window then grows linearly
- this is known as the "fast recovery" phase, implemented in the newer TCP Reno

But after a timeout event:
- CongWin is instead set to 1 MSS
- the window then grows exponentially up to a threshold, then grows linearly

Philosophy:
- 3 dup ACKs indicate the network is still capable of delivering some segments
- a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: when should the exponential increase switch to linear?
A: when CongWin gets to 1/2 of its value before the timeout.

Implementation:
- variable Threshold
- at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

[Figure: congestion window size (segments) vs. transmission round (1-15), showing the threshold and the diverging behaviour of TCP Tahoe (back to 1 MSS) and TCP Reno (fast recovery) after the loss]

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.

When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss.

3-95

TCP sender congestion control

Event: ACK receipt for previously unACKed data
State: Slow Start (SS)
Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unACKed data
State: Congestion Avoidance (CA)
Action: CongWin = CongWin + MSS * (MSS/CongWin)
Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
State: SS or CA
Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
State: SS or CA
Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: enter slow start

Event: duplicate ACK
State: SS or CA
Action: increment duplicate ACK count for the segment being ACKed
Commentary: CongWin and Threshold not changed
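The same table as a small Python sketch (a simplified model for illustration; names such as TcpCongestionControl and on_new_ack are invented here, and the initial Threshold is an assumed implementation-dependent value):

MSS = 1460  # bytes; a typical value, assumed for the example

class TcpCongestionControl:
    def __init__(self):
        self.cong_win = 1 * MSS
        self.threshold = 64 * 1024     # initial Threshold (implementation-dependent)
        self.state = "slow start"

    def on_new_ack(self):              # ACK for previously unACKed data
        if self.state == "slow start":
            self.cong_win += MSS                        # doubles CongWin every RTT
            if self.cong_win > self.threshold:
                self.state = "congestion avoidance"
        else:                                           # congestion avoidance
            self.cong_win += MSS * MSS / self.cong_win  # about +1 MSS per RTT

    def on_triple_dup_ack(self):       # loss detected by 3 duplicate ACKs
        self.threshold = self.cong_win / 2
        self.cong_win = max(self.threshold, MSS)        # never drops below 1 MSS
        self.state = "congestion avoidance"             # Reno-style fast recovery

    def on_timeout(self):              # loss detected by timeout
        self.threshold = self.cong_win / 2
        self.cong_win = 1 * MSS
        self.state = "slow start"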

3-96

TCP throughput

What is the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when a loss occurs.

When the window is W, the throughput is W/RTT.

Just after a loss, the window drops to W/2, the throughput to (W/2)/RTT.

Average throughput: 0.75 W/RTT
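Where the 0.75 comes from (a sketch, assuming the idealized AIMD sawtooth above, with the window ramping linearly from W/2 back up to W between losses): the window is spread evenly over [W/2, W], so its average is (W/2 + W)/2 = 3W/4, and hence

  average throughput ≈ (3W/4)/RTT = 0.75 W/RTT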

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput.

Requires a window size of W = 83,333 in-flight segments.

Throughput in terms of loss rate:

  throughput = 1.22 * MSS / (RTT * sqrt(L))

Sustaining 10 Gbps requires L = 2 * 10^-10. Wow!

New versions of TCP for high-speed networks are needed.
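A quick arithmetic check of both numbers (a sketch; the 10 Gbps target and segment size are taken from the example above):

from math import sqrt

MSS_bits = 1500 * 8          # 1500-byte segments
RTT = 0.100                  # 100 ms
target = 10e9                # 10 Gbps

W = target * RTT / MSS_bits                  # in-flight segments needed
L = (1.22 * MSS_bits / (RTT * target)) ** 2  # loss rate the formula allows

print(round(W))   # ~83333 segments
print(L)          # ~2.1e-10, i.e. about 2 * 10^-10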

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.

[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]

3-99

Why is TCP fair?

Two competing homogeneous sessions (e.g. similar propagation delay, etc.):
- additive increase gives a slope of 1 as throughput increases
- multiplicative decrease decreases throughput proportionally

[Figure: Connection 1 throughput vs. Connection 2 throughput, each axis bounded by R; repeated cycles of "congestion avoidance: additive increase" followed by "loss: decrease window by factor of 2" move the operating point toward the equal-bandwidth-share line]

3-100

Fairness (more)

Fairness and UDP:
- multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
- instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
- research area: TCP-friendly congestion control

Fairness and parallel TCP connections:
- nothing prevents an app from opening parallel connections between 2 hosts
- Web browsers do this
- Example: a link of rate R already supporting 9 connections
  - a new app asking for 1 TCP connection gets rate R/10
  - a new app asking for 11 TCP connections gets 11R/20, i.e. around R/2

3-101

Delay modeling

Q: how long does it take to receive an object from a Web server after sending a request?

Delay is influenced by:
- TCP connection establishment
- data transmission delay
- slow start (the sender's congestion window)

Notation, assumptions:
- one link between client and server, of rate R
- S: MSS (bits)
- O: object size (bits)
- no retransmissions (no loss, no corruption)
- protocol overhead is negligible

Window size:
- first assume a fixed congestion window of W segments
- then a dynamic window, modeling slow start

3-102

Fixed congestion window (1): assume no congestion window constraint

Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R
- the ACK for the first segment in the window returns before a window's worth of data has been sent
- delay = 2RTT + O/R

(The object request is piggybacked; 2RTT + O/R is the "lower bound" on latency.)

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R
- the sender must wait for an ACK after sending each window's worth of data
- delay = 2RTT + O/R + (K-1)[S/R + RTT - WS/R]

where K is the number of windows of data that cover the object: if O is the "size" of the object, K = O/(WS), rounded up to an integer.
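Both cases as a small Python helper (a sketch; the example numbers at the end are made up purely to exercise the second case):

import math

def fixed_window_delay(O, R, S, RTT, W):
    """Latency of an O-bit object over a rate-R link, MSS of S bits,
    fixed congestion window of W segments (the two cases above)."""
    if W * S / R > RTT + S / R:                # case 1: ACK returns before the window is drained
        return 2 * RTT + O / R
    K = math.ceil(O / (W * S))                 # number of windows covering the object
    return 2 * RTT + O / R + (K - 1) * (S / R + RTT - W * S / R)

# e.g. O = 100 kbits, R = 1 Mbps, S = 4 kbits, RTT = 100 ms, W = 4 (made-up values)
print(fixed_window_delay(100e3, 1e6, 4e3, 0.1, 4))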

3-104

TCP Delay Modeling: Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the delay for one object is:

  Latency = 2RTT + O/R + P[RTT + S/R] - (2^P - 1) S/R

where P is the number of times TCP idles at the server:

  P = min{Q, K-1}

- Q is the number of times the server would idle if the object were of infinite size
- K is the number of windows that cover the object

3-105

TCP Delay Modeling: Slow Start (2)

[Timing diagram: after one RTT to initiate the TCP connection and one RTT to request the object, the server sends a first window of S/R, then 2S/R, 4S/R, 8S/R, idling between windows, until the complete transmission; the object is then delivered to the client]

Example:
- O/S = 15 segments
- K = 4 windows
- Q = 2
- P = min{K-1, Q} = 2

Server idles P = 2 times.

Delay components:
- 2 RTT for connection establishment and request
- O/R to transmit the object
- the time the server idles due to slow start

Server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

time to transmit the kth window = 2^(k-1) S/R

S/R + RTT = time from when the server starts to send a segment until the server receives its acknowledgement

idle time after the kth window = max(0, S/R + RTT - 2^(k-1) S/R), where k = 1, 2, ..., K

delay = 2RTT + O/R + sum_{k=1..P} idleTime_k
      = 2RTT + O/R + sum_{k=1..P} [S/R + RTT - 2^(k-1) S/R]
      = 2RTT + O/R + P[RTT + S/R] - (2^P - 1) S/R

[Timing diagram as on the previous slide: connection setup, object request, windows of S/R, 2S/R, 4S/R, 8S/R, complete transmission, object delivered]

3-107

TCP Delay Modeling (4)

Recall: K = number of windows that cover the object. How do we calculate K?

  K = min{k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O}
    = min{k : (2^k - 1) S >= O}
    = min{k : 2^k >= O/S + 1}
    = min{k : k >= log2(O/S + 1)}
    = ceil( log2(O/S + 1) )

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
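A small calculator for these quantities (a sketch; the expression used for Q simply applies the idle-time condition from the previous slide to an unbounded object, since the slide leaves Q's derivation as homework):

import math

def slow_start_latency(O, R, S, RTT):
    """Latency of an O-bit object over a rate-R link with MSS of S bits, under slow start."""
    K = math.ceil(math.log2(O / S + 1))            # windows that cover the object
    Q = math.ceil(math.log2(1 + RTT * R / S))      # idles for an infinite-size object (assumed)
    P = min(K - 1, Q)                              # times the server actually idles
    return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R

# Experiment B below, 1 Mbps row: S = 536 bytes, RTT = 100 msec, O = 5 kbytes
print(round(slow_start_latency(O=5_000 * 8, R=1e6, S=536 * 8, RTT=0.1), 2))   # ~0.52 sec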

3-108

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

R          O/R        P   Min latency = O/R + 2 RTT   Latency with slow start
28 kbps    28.6 sec   1   28.8 sec                    28.9 sec
100 kbps   8.0 sec    2   8.2 sec                     8.4 sec
1 Mbps     800 msec   5   1.0 sec                     1.5 sec
10 Mbps    80 msec    7   0.28 sec                    0.98 sec

Observation: slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

R          O/R        P   Min latency = O/R + 2 RTT   Latency with slow start
28 kbps    1.43 sec   1   1.63 sec                    1.73 sec
100 kbps   0.4 sec    2   0.6 sec                     0.76 sec
1 Mbps     40 msec    3   0.24 sec                    0.52 sec
10 Mbps    4 msec     3   0.20 sec                    0.50 sec

Observation: slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

R          O/R        P   Min latency = O/R + 2 RTT   Latency with slow start
28 kbps    1.43 sec   3   3.4 sec                     5.8 sec
100 kbps   0.4 sec    3   2.4 sec                     5.2 sec
1 Mbps     40 msec    3   2.0 sec                     5.0 sec
10 Mbps    4 msec     3   2.0 sec                     5.0 sec

Observation: slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
- 1 base HTML page (of size O bits)
- M images (each of size O bits)

Non-persistent HTTP:
- M+1 TCP connections in series
- Response time = (M+1) O/R + (M+1) 2 RTT + sum of idle times

Persistent HTTP:
- 2 RTT to request and receive the base HTML file
- 1 RTT to request and receive the M images
- Response time = (M+1) O/R + 3 RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
- suppose M/X is an integer
- 1 TCP connection for the base file
- M/X sets of parallel connections for the images
- Response time = (M+1) O/R + (M/X + 1) 2 RTT + sum of idle times

3-112

[Chart: HTTP response time in seconds, for access rates of 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps, comparing non-persistent, persistent, and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5]

For a low-bandwidth connection, response time is dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

[Chart: HTTP response time in seconds, same comparison but with RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5]

For a larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay*bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services:
- multiplexing, demultiplexing
- reliable data transfer
- flow control
- congestion control

instantiation and implementation in the Internet:
- UDP
- TCP

Next: leaving the network "edge" (application, transport layers) and heading into the network "core".

Page 41: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-41

Go-Back-N (sliding window protocol)

Sender k-bit seq in pkt header ldquowindowrdquo of up to N consecutive unackrsquoed pkts allowed

ACK(n) ACKs all pkts up to including seq n - ldquocumulative ACKrdquo Sender may receive duplicate ACKs (see receiver)

timer for each in-flight pkt timeout(n) retransmit pkt n and all higher seq pkts in

window

(For now treat seq as unlimited)

Q what happen when a receiver is totally disconnectedMAX RETRY

DONrsquoT FALL ASLEEP

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 42: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-42

GBN sender extended FSM

Wait start_timerudt_send(sndpkt[base])udt_send(sndpkt[base+1])hellipudt_send(sndpkt[nextseqnum-1])

timeout

rdt_send(data)

if (nextseqnum lt base+N) sndpkt[nextseqnum] = make_pkt(nextseqnumdatachksum) udt_send(sndpkt[nextseqnum]) if (base == nextseqnum) start_timer nextseqnum++ else refuse_data(data)

base = getacknum(rcvpkt)+1If (base == nextseqnum) stop_timer else start_timer

rdt_rcv(rcvpkt) ampamp notcorrupt(rcvpkt)

base=1nextseqnum=1

rdt_rcv(rcvpkt) ampamp corrupt(rcvpkt)

Buffer data or block higher app

No pkt in pipe

Reset timer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother studying reliable data transfer?

We know it is provided by TCP, so why bother to study it?
Sometimes we may need to implement "some form" of reliable transfer without the heavy-duty TCP.
A good example is multimedia streaming: even though the application is loss tolerant, if too many packets get lost the visual quality suffers, so we may want to implement some form of reliable transfer.
At the very least, appreciate the "good services" provided by some Internet gurus.

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview: RFCs 793, 1122, 1323, 2018, 2581

point-to-point: one sender, one receiver (not multicast)
reliable, in-order byte stream: no "message boundaries"; in the app layer we need delimiters
pipelined: TCP congestion and flow control set the window size
send & receive buffers at each socket: the application writes data into the send buffer, TCP hands segments to IP, and the application reads data from the receive buffer
full duplex data: bi-directional data flow in the same connection; MSS: maximum segment size
connection-oriented: handshaking (exchange of control msgs) inits sender and receiver state (e.g., buffer size) before data exchange
flow controlled: the sender will not overwhelm the receiver

(The 800-lb gorilla in the transport stack. PAY ATTENTION!)

3-53

TCP segment structure (32-bit rows)

source port #, dest port #
sequence number: counting by "bytes" of data (not segments)
acknowledgement number
head len | not used | flag bits U, A, P, R, S, F | receive window
  URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection estab (setup, teardown commands)
  receive window: # bytes the rcvr is willing to accept
checksum (Internet checksum, as in UDP) | urgent data pointer
options (variable length): due to this field we have a variable-length header
application data (variable length)

3-54

TCP seq #'s and ACKs

Seq #'s: byte-stream "number" of the first byte in the segment's data
ACKs: seq # of the next byte expected from the other side; cumulative ACK
Q: how does the receiver handle out-of-order segments? A: the TCP spec doesn't say; it is up to the implementor

Simple telnet scenario (Host A <-> Host B):
  user types 'C':                                A -> B  Seq=42, ACK=79, data = 'C'
  host B ACKs receipt of 'C', echoes back 'C':   B -> A  Seq=79, ACK=43, data = 'C'
  host A ACKs receipt of the echoed 'C':         A -> B  Seq=43, ACK=80

Initial seq #'s are negotiated during the 3-way handshake.

3-55

TCP Round Trip Time and Timeout

Q: how to set the TCP timeout value?
  longer than RTT, but RTT varies
  too short: premature timeout, unnecessary retransmissions
  too long: slow reaction to segment loss, poor performance

Q: how to estimate RTT?
  SampleRTT: measured time from segment transmission until ACK receipt; ignore retransmissions
  SampleRTT will vary; we want the estimated RTT to be "smoother": average several recent measurements, not just the current SampleRTT

(Figure: two tx/retx/ack timing diagrams showing an EstimatedRTT that is too long vs. too short.)

3-56

TCP Round Trip Time and Timeout

EstimatedRTT = (1 - α)·EstimatedRTT + α·SampleRTT
  exponential weighted moving average: the influence of a past sample decreases exponentially fast
  typical value: α = 0.125

Expansion (with ERTT(0) = 0):
  ERTT(1) = (1-α)·ERTT(0) + α·SRTT(1) = α·SRTT(1)
  ERTT(2) = (1-α)·α·SRTT(1) + α·SRTT(2)
  ERTT(3) = (1-α)^2·α·SRTT(1) + (1-α)·α·SRTT(2) + α·SRTT(3)

3-57

Example RTT estimation (figure): SampleRTT vs. EstimatedRTT, in milliseconds, over time in seconds, for the RTT from gaia.cs.umass.edu to fantasia.eurecom.fr.

3-58

TCP Round Trip Time and Timeout

Setting the timeout (Jacobson/Karels): EstimatedRTT plus a "safety margin"
  large variation in EstimatedRTT -> larger safety margin
  first estimate how much SampleRTT deviates from EstimatedRTT:

    DevRTT = (1 - β)·DevRTT + β·|SampleRTT - EstimatedRTT|
    (typically β = 0.25)

  then set the timeout interval:

    TimeoutInterval = EstimatedRTT + 4·DevRTT

(A small estimator sketch follows below.)
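A minimal sketch of this estimator in Python, assuming the typical α = 0.125 and β = 0.25. Initializing EstimatedRTT from the first sample is a common implementation choice, not something these slides prescribe.

class RTTEstimator:
    def __init__(self, alpha=0.125, beta=0.25):
        self.alpha = alpha
        self.beta = beta
        self.estimated_rtt = None   # seconds
        self.dev_rtt = 0.0

    def update(self, sample_rtt):
        # Feed one SampleRTT (taken from a non-retransmitted segment).
        if self.estimated_rtt is None:
            self.estimated_rtt = sample_rtt     # first sample initializes the estimate
        else:
            self.dev_rtt = ((1 - self.beta) * self.dev_rtt
                            + self.beta * abs(sample_rtt - self.estimated_rtt))
            self.estimated_rtt = ((1 - self.alpha) * self.estimated_rtt
                                  + self.alpha * sample_rtt)
        return self.timeout_interval()

    def timeout_interval(self):
        return self.estimated_rtt + 4 * self.dev_rtt

# Example: a few samples around 100 ms with one outlier
est = RTTEstimator()
for s in [0.100, 0.105, 0.098, 0.250, 0.101]:
    print(round(est.update(s), 4))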

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates an rdt service on top of IP's unreliable service
pipelined segments (for performance)
cumulative ACKs
TCP uses a single retransmission timer
retransmissions are triggered by: timeout events, duplicate ACKs (for performance reasons)
initially, consider a simplified TCP sender: ignore duplicate ACKs, ignore flow control and congestion control

3-61

TCP sender events

data rcvd from app:
  create a segment with a seq #; the seq # is the byte-stream number of the first data byte in the segment
  start the timer if not already running (think of the timer as for the oldest unACKed segment); expiration interval: TimeOutInterval

timeout:
  retransmit the segment that caused the timeout
  restart the timer

ACK rcvd:
  if it acknowledges previously unACKed segments: update what is known to be ACKed; start the timer if there are outstanding segments

3-62

TCP sender (simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event)

  event: data received from application above
    create TCP segment with sequence number NextSeqNum
    if (timer currently not running) start timer
    pass segment to IP
    NextSeqNum = NextSeqNum + length(data)

  event: timer timeout
    retransmit not-yet-acknowledged segment with smallest sequence number
    start timer

  event: ACK received, with ACK field value of y
    if (y > SendBase) {
      SendBase = y
      if (there are currently not-yet-acknowledged segments) start timer
    }
} /* end of loop forever */

Comment: SendBase - 1 = last cumulatively ACKed byte.
Example: SendBase - 1 = 71; y = 73, so the rcvr wants 73+; y > SendBase, so new data is ACKed.

3-63

TCP retransmission scenarios

(Figures: two Host A / Host B timelines.)

Lost ACK scenario: A sends "Seq=92, 8 bytes data"; B's ACK=100 is lost (X); A's timer expires and A resends "Seq=92, 8 bytes data"; B ACKs 100 again. SendBase stays at 100 until the retransmission is ACKed.

Premature timeout scenario: A sends "Seq=92, 8 bytes data" and "Seq=100, 20 bytes data"; B's ACK=100 and ACK=120 are delayed, A's timer for Seq=92 expires and A resends "Seq=92, 8 bytes data"; B replies with ACK=120. SendBase moves from 100 to 120.

3-64

TCP retransmission scenarios (more)

Cumulative ACK scenario: A sends "Seq=92, 8 bytes data" and "Seq=100, 20 bytes data"; ACK=100 is lost (X), but ACK=120 arrives before the timeout, so SendBase moves to 120 and nothing is retransmitted.

Room for improvement?

3-65

TCP ACK generation [RFC 1122, RFC 2581]

Event at receiver -> TCP receiver action:

  arrival of in-order segment with expected seq #; all data up to the expected seq # already ACKed
    -> delayed ACK: wait up to 500 ms for the next segment; if no next segment, send ACK
  arrival of in-order segment with expected seq #; one other segment has an ACK pending
    -> immediately send a single cumulative ACK, ACKing both in-order segments
  arrival of out-of-order segment with higher-than-expected seq # (gap detected)
    -> immediately send a duplicate ACK, indicating the seq # of the next expected byte
  arrival of a segment that partially or completely fills the gap
    -> immediately send an ACK, provided the segment starts at the lower end of the gap

ACK the "largest in-order byte" seq #. Delayed ACKs REDUCE FEEDBACK TRAFFIC BY HALF.

3-66

Fast Retransmit

the timeout period is often relatively long: long delay before resending a lost packet
detect lost segments via duplicate ACKs
  the sender often sends many segments back-to-back
  if a segment is lost, there will likely be many duplicate ACKs
if the sender receives 3 duplicate ACKs for the same data, it supposes that the segment after the ACKed data was lost:
  fast retransmit: resend the segment before the timer expires

3-67

Fast retransmit algorithm

event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments) start timer
  }
  else {                      /* a duplicate ACK for an already-ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y     /* fast retransmit */
  }

Q: why resend the pkt with seq # y?
A: that is what the receiver expects. (A tiny sketch of this bookkeeping follows below.)
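A tiny Python sketch of the duplicate-ACK counting rule above; resend is an assumed callback, and everything else (including the timer) is omitted.

class FastRetransmit:
    def __init__(self, resend):
        self.send_base = 0
        self.dup_acks = 0
        self.resend = resend

    def on_ack(self, y):
        if y > self.send_base:          # new data ACKed
            self.send_base = y
            self.dup_acks = 0
        else:                           # duplicate ACK for already-ACKed data
            self.dup_acks += 1
            if self.dup_acks == 3:
                self.resend(y)          # fast retransmit: resend the segment starting at y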

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

the receive side of a TCP connection has a receive buffer
the app process may be slow at reading from the buffer
flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast
speed-matching service: matching the send rate to the receiving app's drain rate

3-70

TCP Flow control: how it works

(Suppose the TCP receiver discards out-of-order segments.)

spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
the rcvr advertises the spare room by including the value of RcvWindow in segments
the sender limits unACKed data to RcvWindow; this guarantees the receive buffer doesn't overflow

This goes to show that careful design of the header fields is important. (A tiny numeric illustration follows below.)
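A minimal numeric illustration of this bookkeeping (all quantities in bytes; the names mirror the slide's variables).

def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    # Spare room the receiver advertises back to the sender.
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

def sender_may_send(last_byte_sent, last_byte_acked, rcvwindow):
    # Sender-side check: keep unACKed data within the advertised window.
    return (last_byte_sent - last_byte_acked) <= rcvwindow

print(rcv_window(4096, 3000, 1500))          # 2596 bytes of spare room
print(sender_may_send(5000, 3000, 2596))     # True: 2000 unACKed bytes <= 2596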

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender and receiver establish a "connection" before exchanging data segments
  initialize TCP variables: seq #'s, buffers, flow-control info (e.g., RcvWindow)
  client: the connection initiator: Socket clientSocket = new Socket("hostname", "port number");
  server: contacted by the client: Socket connectionSocket = welcomeSocket.accept();

Three-way handshake:
  Step 1: the client host sends a TCP SYN segment to the server: specifies the initial seq #, no data
  Step 2: the server host receives the SYN, replies with a SYN-ACK segment: the server allocates buffers, specifies the server's initial seq #
  Step 3: the client receives the SYN-ACK, replies with an ACK segment, which may contain data

IMPORTANT: SYN FLOODING (a well-known attack on this handshake).

3-73

TCP three-way handshake

  client -> server (connection request): SYN=1, seq=client_isn
  server -> client (connection granted): SYN=1, seq=server_isn, ack=client_isn+1
  client -> server (ACK):                SYN=0, seq=client_isn+1, ack=server_isn+1

3-74

TCP Connection Management (cont.)

Closing a connection: the client closes its socket: clientSocket.close();

Step 1: the client end system sends a TCP FIN control segment to the server
Step 2: the server receives the FIN, replies with an ACK; closes the connection, sends its own FIN

(Figure: client sends FIN, server ACKs and later sends its FIN; the client ACKs, enters "timed wait", and finally the connection is closed.)

Q: why don't we combine the ACK and the FIN?
A: the other side may still have some data in the pipeline.

3-75

TCP Connection Management (cont.)

Step 3: the client receives the FIN, replies with an ACK; enters "timed wait", during which it will respond with an ACK to received FINs
Step 4: the server receives the ACK; connection closed

Note: with a small modification, this can handle simultaneous FINs.

(Figure: FIN/ACK exchange in both directions; both sides pass through closing, the client through timed wait, then both end up closed.)

3-76

TCP Connection Management (cont.)

(Figure: TCP client lifecycle and TCP server lifecycle state diagrams.)

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion, informally: "too many sources sending too much data too fast for the network to handle"
different from flow control!
manifestations:
  lost packets (buffer overflow at routers)
  long delays (queueing in router buffers)
another top-10 problem!
TCP provides one of the MANY WAYS to perform congestion control.

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous)
one router with infinite buffers, capacity C
no retransmission
(Host A: λin = original data, from the application; Host B: λout, to the application; unlimited shared output link buffers)

large delays when congested
maximum achievable throughput

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers
sender retransmission of lost packets (finite shared output link buffers)
Host A: λin = original data; λ'in = original data plus retransmitted data (due to the transport layer, because a pkt got dropped); Host B: λout

3-81

Causes/costs of congestion: scenario 2 (cont.)

always: λin = λout (goodput)
"perfect" retransmission, only when loss: λ'in > λout
retransmission of a delayed (not lost) packet makes λ'in larger (than the perfect case) for the same λout

"costs" of congestion:
  more work (retransmissions) for a given "goodput"
  unneeded retransmissions: the link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3

four senders; multihop paths; timeout/retransmit
Q: what happens as λin and λ'in increase?
(finite shared output link buffers; Host A: λin = original data, λ'in = original data plus retransmitted data; Host B: λout)

3-83

Causes/costs of congestion: scenario 3 (cont.)

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted.
(Figure: as offered load grows, λout eventually drops toward zero; the system collapses, e.g., students.)

3-84

Approaches towards congestion control

Two broad approaches:

End-end congestion control:
  no explicit feedback from the network
  congestion inferred from end-system observed loss and delay
  approach taken by TCP

Network-assisted congestion control:
  routers provide feedback to end systems
    single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
    explicit rate the sender should send at

3-85

Case study: ATM ABR congestion control

ABR: available bit rate
  "elastic service"
  if the sender's path is "underloaded": the sender should use the available bandwidth
  if the sender's path is congested: the sender is throttled to a minimum guaranteed rate

RM (resource management) cells:
  sent by the sender, interspersed with data cells
  bits in the RM cell are set by switches ("network-assisted")
    NI bit: no increase in rate (mild congestion)
    CI bit: congestion indication
  RM cells are returned to the sender by the receiver, with the bits intact

3-86

Case study: ATM ABR congestion control (cont.)

two-byte ER (explicit rate) field in the RM cell:
  a congested switch may lower the ER value in the cell
  the sender's send rate is thus the minimum supportable rate on the path
EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch
  if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)
sender limits transmission:  LastByteSent - LastByteAcked <= CongWin
roughly,  rate = CongWin / RTT  bytes/sec
CongWin is a dynamic function of perceived network congestion

How does the sender perceive congestion?
  loss event = timeout or 3 duplicate ACKs
  the TCP sender reduces its rate (CongWin) after a loss event

three mechanisms:
  AIMD
  slow start
  conservative behaviour after timeout events

Note: congestion control must also be efficient, to make use of the available bandwidth.

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events (probing)
(Figure: congestion window of a long-lived TCP connection over time, a sawtooth with y-axis marks at 8, 16, and 24 Kbytes.)

Role of ACKs:
  1. an indication of loss
  2. ACKs are self-clocking: if the delay is large, the rate at which the congestion window grows is reduced

Human analogy: HK government or CUHK.

3-90

TCP Slow Start

when a connection begins, CongWin = 1 MSS
  example: MSS = 500 bytes & RTT = 200 msec -> initial rate = 20 kbps
the available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate
when the connection begins, increase the rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)

when the connection begins, increase the rate exponentially until the first loss event:
  double CongWin every RTT
  done by incrementing CongWin by 1 MSS for every ACK received
summary: the initial rate is slow, but it ramps up exponentially fast
(Figure: Host A sends one segment, then two, then four, one RTT apart.)

3-92

Refinement

After 3 dup ACKs:
  CongWin is cut in half, and the window then grows linearly
  this is known as the "fast recovery" phase, implemented in (new) TCP Reno
But after a timeout event:
  CongWin is instead set to 1 MSS
  the window then grows exponentially up to a threshold, then grows linearly

Philosophy:
  3 dup ACKs indicate the network is capable of delivering some segments
  a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: when should the exponential increase switch to linear?
A: when CongWin gets to 1/2 of its value before the timeout.

Implementation:
  variable Threshold
  at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

(Figure: congestion window size (segments) vs. transmission round, showing the threshold and the TCP Tahoe vs. TCP Reno trajectories.)

3-94

Summary: TCP Congestion Control

when CongWin is below Threshold, the sender is in the slow-start phase: the window grows exponentially
when CongWin is above Threshold, the sender is in the congestion-avoidance phase: the window grows linearly
when a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold
when a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss. (A small state-machine sketch of the rules above follows below.)
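A minimal sketch of these rules as a Reno-style state machine (units of MSS; real timers, flow control, and fast-recovery window inflation are deliberately omitted).

class RenoCongestionControl:
    def __init__(self, threshold=64):
        self.cong_win = 1.0          # slow start begins at 1 MSS
        self.threshold = float(threshold)

    def on_new_ack(self):
        if self.cong_win < self.threshold:
            self.cong_win += 1.0                      # slow start: +1 MSS per ACK (doubles per RTT)
        else:
            self.cong_win += 1.0 / self.cong_win      # congestion avoidance: +1 MSS per RTT

    def on_triple_dup_ack(self):
        self.threshold = self.cong_win / 2
        self.cong_win = self.threshold                # multiplicative decrease (fast recovery)

    def on_timeout(self):
        self.threshold = self.cong_win / 2
        self.cong_win = 1.0                           # back to slow start

# Example: grow for a while, then suffer a triple-dup-ACK loss
cc = RenoCongestionControl(threshold=8)
for _ in range(20):
    cc.on_new_ack()
cc.on_triple_dup_ack()
print(cc.cong_win, cc.threshold)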

3-95

TCP sender congestion control

Event: ACK receipt for previously unACKed data  |  State: Slow Start (SS)
  Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
  Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unACKed data  |  State: Congestion Avoidance (CA)
  Action: CongWin = CongWin + MSS·(MSS/CongWin)
  Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK  |  State: SS or CA
  Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
  Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout  |  State: SS or CA
  Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
  Commentary: enter slow start

Event: duplicate ACK  |  State: SS or CA
  Action: increment the duplicate ACK count for the segment being ACKed
  Commentary: CongWin and Threshold not changed

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)
let W be the window size when a loss occurs
when the window is W, throughput is W/RTT
just after a loss, the window drops to W/2, and throughput to W/(2·RTT)
average throughput: 0.75·W/RTT

3-97

TCP Futures

example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
requires a window size of W = 83,333 in-flight segments
throughput in terms of loss rate:

  Throughput = (1.22 · MSS) / (RTT · √L)

-> L = 2·10^-10. Wow! New versions of TCP are needed for high-speed networks. (A quick numeric check follows below.)
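A quick numeric check of the quoted figures, using the loss-rate throughput relation above (the standard model this slide assumes).

MSS = 1500 * 8          # bits per segment
RTT = 0.100             # seconds
target = 10e9           # 10 Gbps

W = target * RTT / MSS                      # in-flight segments needed
L = (1.22 * MSS / (RTT * target)) ** 2      # from Throughput = 1.22*MSS/(RTT*sqrt(L))
print(round(W))         # 83333 segments
print(L)                # ~2.1e-10, i.e. the ~2*10^-10 quoted above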

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.
(Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R.)

3-99

Why is TCP fair?

Two competing homogeneous sessions (e.g., similar propagation delay, etc.):
  additive increase gives a slope of 1 as throughput increases
  multiplicative decrease decreases throughput proportionally

(Figure: connection 2 throughput vs. connection 1 throughput, each bounded by R; the trajectory alternates "congestion avoidance: additive increase" and "loss: decrease window by factor of 2", moving toward the equal-bandwidth-share line.)

3-100

Fairness (more)

Fairness and UDP:
  multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
  instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
  research area: TCP-friendly congestion control

Fairness and parallel TCP connections:
  nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this
  example: a link of rate R supporting 9 connections
    a new app asking for 1 TCP connection gets rate R/10
    a new app asking for 11 TCP connections gets around R/2

3-101

Delay modeling

Q: how long does it take to receive an object from a Web server after sending a request?
Delay is influenced by:
  TCP connection establishment
  data transmission delay
  slow start (the sender's congestion window)

Notation, assumptions:
  assume one link between client and server, of rate R
  S: MSS (bits); O: object size (bits)
  no retransmissions (no loss, no corruption); the protocol's overhead is negligible

Window size:
  first assume a fixed congestion window of W segments
  then dynamic window modeling: slow start

3-102

Fixed congestion window (1)

Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R
  the ACK for the first segment in the window returns before a window's worth of data has been sent
  (i.e., the congestion window imposes no real constraint)
  delay = 2·RTT + O/R
  the object request is piggybacked; this is the "lower bound" on latency

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R
  wait for the ACK after sending a window's worth of data
  delay = 2·RTT + O/R + (K-1)·[S/R + RTT - WS/R]
  where K is the number of windows of data that cover the object: if O is the "size" of the object, K = O/(WS), rounded up to an integer. (A small calculator for both cases follows below.)
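A small sketch that evaluates the two static-window cases above (all rates in bits per second, sizes in bits; the example numbers are hypothetical).

import math

def fixed_window_delay(O, S, R, RTT, W):
    # O: object size (bits), S: MSS (bits), R: link rate (bps), W: window (segments)
    if W * S / R >= RTT + S / R:
        return 2 * RTT + O / R                  # first case: the sender never stalls
    K = math.ceil(O / (W * S))                  # number of windows covering the object
    stall = S / R + RTT - W * S / R             # idle time per stalled window
    return 2 * RTT + O / R + (K - 1) * stall    # second case: the sender stalls K-1 times

# Hypothetical example: 100 kbit object, 8 kbit segments, 1 Mbps link, 100 ms RTT, W = 4
print(round(fixed_window_delay(O=100e3, S=8e3, R=1e6, RTT=0.1, W=4), 3))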

3-104

TCP Delay Modeling: Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the delay for one object is:

  Latency = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where P is the number of times TCP idles at the server:

  P = min{Q, K-1}

  - Q is the number of times the server would idle if the object were of infinite size
  - K is the number of windows that cover the object

3-105

TCP Delay Modeling: Slow Start (2)

(Figure: timeline at client and server: initiate TCP connection, request object, then windows of duration S/R, 2S/R, 4S/R, 8S/R until the transmission completes and the object is delivered.)

Example:
  O/S = 15 segments
  K = 4 windows
  Q = 2
  P = min{K-1, Q} = 2
  the server idles P = 2 times

Delay components:
  2·RTT for connection establishment and request
  O/R to transmit the object
  the time the server idles due to slow start
The server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

For k = 1, 2, ..., K:

  time to transmit the kth window = (2^(k-1))·S/R
  time from when the server starts to send a segment until it receives its acknowledgement = S/R + RTT
  idle time after the kth window = [S/R + RTT - (2^(k-1))·S/R]^+

So:

  delay = 2·RTT + O/R + Σ_{p=1..P} idleTime_p
        = 2·RTT + O/R + Σ_{k=1..P} [S/R + RTT - (2^(k-1))·S/R]
        = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

(Figure: the same client/server timeline as before, with windows of duration S/R, 2S/R, 4S/R, 8S/R.)

3-107

TCP Delay Modeling (4)

Recall that K is the number of windows that cover the object. How do we calculate K?

  K = min{k : 2^0·S + 2^1·S + ... + 2^(k-1)·S >= O}
    = min{k : (2^k - 1)·S >= O}
    = min{k : 2^k >= O/S + 1}
    = min{k : k >= log2(O/S + 1)}
    = ceil( log2(O/S + 1) )

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW). (A small latency calculator follows below.)
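A small calculator for the slow-start latency model above. K follows the derivation just shown; Q uses the standard closed form ceil(log2(1 + RTT·R/S)), which is an assumption consistent with the idle-time expression and reproduces the P values in the experiment tables below.

import math

def slow_start_latency(O, S, R, RTT):
    # O, S in bits; R in bits per second; RTT in seconds
    K = math.ceil(math.log2(O / S + 1))                     # windows covering the object
    Q = max(0, math.ceil(math.log2(1 + RTT * R / S)))       # idles for an infinite object (assumed form)
    P = min(Q, K - 1)                                       # actual number of idles
    return 2 * RTT + O / R + P * (RTT + S / R) - (2**P - 1) * S / R

# Example with Experiment B's parameters at R = 1 Mbps: S = 536 bytes, O = 5 Kbytes, RTT = 100 ms
print(round(slow_start_latency(O=5e3 * 8, S=536 * 8, R=1e6, RTT=0.1), 3))   # ~0.52 sec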

3-108

Experiment A: S = 536 bytes, RTT = 100 msec, O = 100 Kbytes (relatively large)

  R        | O/R      | P | Min latency: O/R + 2 RTT | Latency with slow start
  28 kbps  | 28.6 sec | 1 | 28.8 sec                 | 28.9 sec
  100 kbps | 8 sec    | 2 | 8.2 sec                  | 8.4 sec
  1 Mbps   | 800 msec | 5 | 1.0 sec                  | 1.5 sec
  10 Mbps  | 80 msec  | 7 | 0.28 sec                 | 0.98 sec

Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B: S = 536 bytes, RTT = 100 msec, O = 5 Kbytes (relatively small)

  R        | O/R      | P | Min latency: O/R + 2 RTT | Latency with slow start
  28 kbps  | 1.43 sec | 1 | 1.63 sec                 | 1.73 sec
  100 kbps | 0.4 sec  | 2 | 0.6 sec                  | 0.76 sec
  1 Mbps   | 40 msec  | 3 | 0.24 sec                 | 0.52 sec
  10 Mbps  | 4 msec   | 3 | 0.20 sec                 | 0.50 sec

Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C: S = 536 bytes, RTT = 1 sec, O = 5 Kbytes (relatively small)

  R        | O/R      | P | Min latency: O/R + 2 RTT | Latency with slow start
  28 kbps  | 1.43 sec | 3 | 3.4 sec                  | 5.8 sec
  100 kbps | 0.4 sec  | 3 | 2.4 sec                  | 5.2 sec
  1 Mbps   | 40 msec  | 3 | 2.0 sec                  | 5.0 sec
  10 Mbps  | 4 msec   | 3 | 2.0 sec                  | 5.0 sec

Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
  1 base HTML page (of size O bits)
  M images (each of size O bits)

Non-persistent HTTP:
  M+1 TCP connections in series
  Response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times

Persistent HTTP:
  2 RTT to request and receive the base HTML file
  1 RTT to request and receive the M images
  Response time = (M+1)·O/R + 3·RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
  suppose M/X is an integer
  1 TCP connection for the base file; M/X sets of parallel connections for the images
  Response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times

(A small sketch of these formulas follows below.)
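A small sketch of the three response-time formulas; the sum of idle times is left as an explicit parameter, since it comes from the slow-start model of the previous slides.

def non_persistent(M, O, R, RTT, idle_times=0.0):
    return (M + 1) * O / R + (M + 1) * 2 * RTT + idle_times

def persistent(M, O, R, RTT, idle_times=0.0):
    return (M + 1) * O / R + 3 * RTT + idle_times

def non_persistent_parallel(M, X, O, R, RTT, idle_times=0.0):
    assert M % X == 0, "the slides assume M/X is an integer"
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT + idle_times

# Example with the next slide's parameters (idle times ignored here):
M, X, O, RTT, R = 10, 5, 5e3 * 8, 0.1, 1e6
print(round(non_persistent(M, O, R, RTT), 2))               # 2.64 s
print(round(persistent(M, O, R, RTT), 2))                   # 0.74 s
print(round(non_persistent_parallel(M, X, O, R, RTT), 2))   # 1.04 s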

3-112

(Figure: HTTP response time in seconds, RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5, for non-persistent, persistent, and parallel non-persistent HTTP at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps.)

For a low-bandwidth connection, response time is dominated by transmission time; persistent connections give only a minor improvement over parallel connections.

3-113

(Figure: HTTP response time in seconds, RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5, same three HTTP variants and link rates.)

For larger RTTs, response time is dominated by TCP establishment and slow-start delays; persistent connections now give an important improvement, particularly in high delay·bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport-layer services:
  multiplexing, demultiplexing
  reliable data transfer
  flow control
  congestion control
instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application and transport layers) and heading into the network "core".

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 43: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-43

GBN receiver extended FSM

ACK-only always send ACK for correctly-received pkt with highest in-order seq may generate duplicate ACKs need only remember expectedseqnum

out-of-order pkt discard (donrsquot buffer) -gt no receiver buffering Re-ACK pkt with highest in-order seq

Wait

udt_send(sndpkt)

default

rdt_rcv(rcvpkt) ampamp notcurrupt(rcvpkt) ampamp hasseqnum(rcvpktexpectedseqnum)

extract(rcvpktdata)deliver_data(data)sndpkt = make_pkt(expectedseqnumACKchksum)udt_send(sndpkt)expectedseqnum++

expectedseqnum=1sndpkt = make_pkt(expectedseqnumACKchksum)

If in order pkt is received deliver to app and ack Else just drop it

We donrsquot even

need timer at th

e

receiver Simply

rely on the retx by

the sender

3-44

GBN in action

What determine the size of window

1 RTT2 Buffer at the

receiver(flow control)

3 Network congestion

Q GBN has poor performance HowSender sends pkt 123456789 pkt 1 got lost receiver got pkt 2345hellip but will discard them

Window size=N=4

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling: Assume a Web page consists of:
• 1 base HTML page (of size O bits)
• M images (each of size O bits)

Non-persistent HTTP:
• M+1 TCP connections in series
• Response time = (M+1)O/R + (M+1)2RTT + sum of idle times

Persistent HTTP:
• 2 RTT to request and receive base HTML file
• 1 RTT to request and receive M images
• Response time = (M+1)O/R + 3RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
• Suppose M/X is an integer
• 1 TCP connection for base file
• M/X sets of parallel connections for images
• Response time = (M+1)O/R + (M/X + 1)2RTT + sum of idle times
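Dropping the sum-of-idle-times terms, the deterministic parts of the three response-time formulas above can be compared directly. The sketch below is my own simplification (it therefore underestimates the true response times, especially at large RTT):

    def http_response_times(O, R, RTT, M, X):
        # deterministic parts of the formulas above; the sum-of-idle-times terms are omitted
        non_persistent = (M + 1) * O / R + (M + 1) * 2 * RTT
        persistent     = (M + 1) * O / R + 3 * RTT
        parallel       = (M + 1) * O / R + (M / X + 1) * 2 * RTT
        return non_persistent, persistent, parallel

    # parameters of the first chart below, at R = 100 kbps: O = 5 Kbytes, RTT = 100 msec, M = 10, X = 5
    print(http_response_times(O=5_000 * 8, R=100_000, RTT=0.1, M=10, X=5))   # roughly (6.6, 4.7, 5.0) sec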

3-112

[Bar chart: HTTP response time (in seconds, 0-20) at access rates 28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps, comparing non-persistent, persistent, and parallel non-persistent HTTP. RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.]

For low bandwidth, connection and response time are dominated by transmission time.

Persistent connections only give minor improvement over parallel connections

3-113

[Bar chart: HTTP response time (in seconds, 0-70) at access rates 28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps, comparing non-persistent, persistent, and parallel non-persistent HTTP. RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.]

For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay × bandwidth networks.

3-114

Chapter 3 Summary

Principles behind transport layer services:
• multiplexing, demultiplexing
• reliable data transfer
• flow control
• congestion control

Instantiation and implementation in the Internet:
• UDP
• TCP

Next:
• leaving the network "edge" (application, transport layers)
• into the network "core"

3-45

Selective Repeat (improvement of the GBN Protocol) receiver individually acknowledges all

correctly received pkts buffers pkts as needed for eventual in-order

delivery to upper layer Eg sender pkt 1234hellip10 receiver got

246810 Sender resends 13579

sender only resends pkts for which ACK not received sender timer for EACH unACKed pkt

sender window N consecutive seq rsquos again limits seq s of sent unACKed pkts

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair?

Two competing homogeneous sessions (e.g., similar propagation delay, etc.):

Additive increase gives a slope of 1 as throughput increases; multiplicative decrease cuts throughput proportionally.

[Figure: Connection 2 throughput vs. Connection 1 throughput, each axis bounded by R. The trajectory alternates between congestion-avoidance additive increase and a loss that halves the window, moving it toward the equal bandwidth share line.]
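A hedged simulation of that picture: two homogeneous AIMD flows sharing capacity R, both additively increasing until the link saturates and then halving on loss. The step sizes and iteration count are illustrative assumptions; the point is that the two rates converge toward each other regardless of where they start:

    R = 100.0                      # bottleneck capacity
    x1, x2 = 80.0, 10.0            # deliberately unequal starting rates

    for _ in range(200):
        if x1 + x2 > R:            # loss: multiplicative decrease for both
            x1, x2 = x1 / 2, x2 / 2
        else:                      # no loss: additive increase for both
            x1, x2 = x1 + 1, x2 + 1

    print(round(x1, 1), round(x2, 1))   # the two rates are now nearly equal (fair share)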

3-100

Fairness (more)

Fairness and UDP: multimedia apps often do not use TCP; they do not want their rate throttled by congestion control. Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss. (Research area: TCP-friendly rate control.)

Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts, and Web browsers do this. Example: a link of rate R supporting 9 connections. A new app that asks for 1 TCP connection gets rate R/10; a new app that asks for 11 TCP connections gets around R/2.

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by: TCP connection establishment, data transmission delay, slow start, and the sender's congestion window.

Notation, assumptions: assume one link between client and server of rate R; S = MSS (bits); O = object size (bits); no retransmissions (no loss, no corruption); protocol overhead is negligible.

Window size: first assume a fixed congestion window of W segments, then dynamic window modeling (slow start).

3-102

Fixed congestion window (1): assume no congestion window constraint.

First case: W S/R > RTT + S/R. The ACK for the first segment in the window returns before a window's worth of data has been sent.

delay = 2 RTT + O/R

The object request is piggybacked. This is the "lower bound" of latency.

Let W denote a 'fixed' congestion window size (a positive integer).

3-103

Fixed (or static) congestion window (2)

Second case: W S/R < RTT + S/R. The sender must wait for an ACK after sending each window's worth of data.

delay = 2 RTT + O/R + (K-1) [S/R + RTT - W S/R]

where K is the number of windows of data that cover the object. If O is the "size" of the object, K = O/(W S), rounded up to an integer.
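A small sketch evaluating both fixed-window cases (the function name and example numbers are mine, not the slides'):

    from math import ceil

    def fixed_window_delay(O, R, S, W, rtt):
        """Latency for a fixed congestion window of W segments, no loss."""
        K = ceil(O / (W * S))                  # windows needed to cover the object
        if W * S / R > rtt + S / R:            # case 1: ACK returns before window is exhausted
            return 2 * rtt + O / R
        stall = S / R + rtt - W * S / R        # per-window idle time (case 2)
        return 2 * rtt + O / R + (K - 1) * stall

    # e.g. O = 100 kbits, R = 1 Mbps, S = 4 kbits, W = 4, RTT = 100 ms
    print(fixed_window_delay(100e3, 1e6, 4e3, 4, 0.1))    # ~0.83 sec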

3-104

TCP Delay Modeling Slow Start (1)

Now suppose the window grows according to slow start.

Will show that the delay for one object is:

Latency = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where P is the number of times TCP idles at the server:

P = min{Q, K - 1}

- where Q is the number of times the server would idle if the object were of infinite size
- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[Figure: slow-start timing diagram between client and server: connection setup (RTT), object request, then windows of S/R, 2S/R, 4S/R, 8S/R until the object is delivered]

Example: O/S = 15 segments, K = 4 windows, Q = 2, P = min{K-1, Q} = 2. The server idles P = 2 times.

Delay components: 2 RTT for connection establishment and request; O/R to transmit the object; the time the server idles due to slow start. The server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

Ingredients, for the kth window (where k = 1, 2, ..., K):

- time to transmit the kth window: 2^(k-1) S/R
- time from when the server starts to send a segment until it receives an acknowledgement: RTT + S/R
- idle time after the kth window: [S/R + RTT - 2^(k-1) S/R]^+

delay = 2 RTT + O/R + sum over p = 1..P of idleTime_p
      = 2 RTT + O/R + sum over k = 1..P of [S/R + RTT - 2^(k-1) S/R]
      = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

[Figure: the same client/server slow-start timing diagram as on the previous slide]

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

K = min{k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O}
  = min{k : (2^k - 1) S >= O}
  = min{k : 2^k >= O/S + 1}
  = min{k : k >= log2(O/S + 1)}
  = ceil(log2(O/S + 1))

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
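Putting the last few slides together: a sketch that computes K, Q, P and the slow-start latency, which can be used to reproduce the "Latency with slow start" columns in the experiments below. The helper name is mine, and the expression for Q is my reading of the "similar" derivation left as homework, so treat it as an assumption:

    from math import ceil, floor, log2

    def slow_start_latency(O, R, S, rtt):
        """2*RTT + O/R + P*(RTT + S/R) - (2**P - 1)*S/R, no loss."""
        K = ceil(log2(O / S + 1))               # windows that cover the object
        Q = floor(log2(1 + rtt * R / S)) + 1    # idles for an infinite-size object (assumed form)
        P = min(K - 1, Q)                       # times the server actually idles
        return 2 * rtt + O / R + P * (rtt + S / R) - (2 ** P - 1) * S / R

    # Experiment A, first row: S = 536 bytes, RTT = 100 ms, O = 100 kbytes, R = 28 kbps
    print(round(slow_start_latency(100e3 * 8, 28e3, 536 * 8, 0.1), 1))   # ~28.9 sec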

3-108

Experiment A

R        | O/R      | P | Min latency (O/R + 2 RTT) | Latency with slow start
28 kbps  | 28.6 sec | 1 | 28.8 sec                  | 28.9 sec
100 kbps | 8.0 sec  | 2 | 8.2 sec                   | 8.4 sec
1 Mbps   | 800 msec | 5 | 1.0 sec                   | 1.5 sec
10 Mbps  | 80 msec  | 7 | 0.28 sec                  | 0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large).

Conclusion: slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R        | O/R      | P | Min latency (O/R + 2 RTT) | Latency with slow start
28 kbps  | 1.43 sec | 1 | 1.63 sec                  | 1.73 sec
100 kbps | 0.4 sec  | 2 | 0.6 sec                   | 0.76 sec
1 Mbps   | 40 msec  | 3 | 0.24 sec                  | 0.52 sec
10 Mbps  | 4 msec   | 3 | 0.20 sec                  | 0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small).

Conclusion: slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

R        | O/R      | P | Min latency (O/R + 2 RTT) | Latency with slow start
28 kbps  | 1.43 sec | 3 | 3.4 sec                   | 5.8 sec
100 kbps | 0.4 sec  | 3 | 2.4 sec                   | 5.2 sec
1 Mbps   | 40 msec  | 3 | 2.0 sec                   | 5.0 sec
10 Mbps  | 4 msec   | 3 | 2.0 sec                   | 5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small).

Conclusion: slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling. Assume a Web page consists of 1 base HTML page (of size O bits) and M images (each of size O bits).

Non-persistent HTTP: M+1 TCP connections in series. Response time = (M+1) O/R + (M+1) 2 RTT + sum of idle times.

Persistent HTTP: 2 RTT to request and receive the base HTML file, 1 RTT to request and receive the M images. Response time = (M+1) O/R + 3 RTT + sum of idle times.

Non-persistent HTTP with X parallel connections: suppose M/X is an integer; 1 TCP connection for the base file, then M/X sets of X parallel connections for the images. Response time = (M+1) O/R + (M/X + 1) 2 RTT + sum of idle times.
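A hedged evaluation of the three formulas with the chart parameters used on the next slide; the "sum of idle times" term is deliberately left out here, so these are lower bounds rather than the plotted values (function and variable names are mine):

    def http_response_times(O, R, M, X, rtt):
        """The three response-time formulas, without the idle-time terms."""
        non_persistent = (M + 1) * O / R + (M + 1) * 2 * rtt
        persistent     = (M + 1) * O / R + 3 * rtt
        parallel       = (M + 1) * O / R + (M / X + 1) * 2 * rtt
        return non_persistent, persistent, parallel

    # O = 5 Kbytes, M = 10 images, X = 5 parallel connections, RTT = 100 ms, R = 1 Mbps
    print(http_response_times(5e3 * 8, 1e6, 10, 5, 0.1))   # ~(2.64, 0.74, 1.04) seconds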

3-112

[Figure: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent HTTP at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5]

For a low-bandwidth connection, connection and response time are dominated by transmission time. Persistent connections give only a minor improvement over parallel connections.

3-113

[Figure: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent HTTP at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5]

For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary: principles behind transport layer services: multiplexing and demultiplexing; reliable data transfer; flow control; congestion control. Instantiation and implementation in the Internet: UDP, TCP.

Next: leaving the network "edge" (application, transport layers) and heading into the network "core".

Page 46: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-46

Selective repeat sender receiver windows

Q why we have this

Ack is lost or ack is on its way

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 47: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-47

Selective repeat

data from above if next available seq in

window send pkt

timeout(n) for pkt n resend pkt n restart

timer

ACK(n) in [sendbasesendbase+N]

mark pkt n as received if n smallest unACKed

pkt advance window base to next unACKed seq

senderpkt n in [rcvbase

rcvbase+N-1]

send ACK(n) out-of-order buffer in-order deliver (also

deliver buffered in-order pkts) advance window to next not-yet-received pkt

pkt n in [rcvbase-Nrcvbase-1]

ACK(n)

otherwise ignore

receiver

(slide the window)

Q why we need this

The ack got lost Sender may timeout resend pkt we need to ack

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90
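To make the AIMD sawtooth concrete, here is a minimal simulation sketch in Python (not part of the slides; the MSS value and the loss pattern are assumed purely for illustration):

# AIMD sketch: +1 MSS per loss-free RTT, halve CongWin on a loss event.
MSS = 1000                 # bytes (assumed)
cong_win = 8 * MSS         # start above 1 MSS so the sawtooth is visible

def on_rtt(loss_event):
    """Update CongWin once per RTT."""
    global cong_win
    if loss_event:
        cong_win = max(MSS, cong_win // 2)   # multiplicative decrease
    else:
        cong_win += MSS                      # additive increase (probing)

for rtt in range(1, 31):                     # a loss every 10th RTT (assumed)
    on_rtt(loss_event=(rtt % 10 == 0))
    print(rtt, cong_win)

Plotting cong_win against rtt reproduces the sawtooth sketched on this slide.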

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)
When a connection begins, increase the rate exponentially until the first loss event:
double CongWin every RTT
done by incrementing CongWin for every ACK received
Summary: the initial rate is slow but ramps up exponentially fast
[Figure: Host A sends one segment, then two, then four segments in successive RTTs to Host B.]

3-92
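A tiny sketch (assumed, not the slides' own code) of why incrementing CongWin by 1 MSS per ACK doubles it every RTT: a window of w segments returns w ACKs per RTT, so the window gains w MSS per RTT.

MSS = 1000
cong_win = 1 * MSS
for rtt in range(1, 5):
    acks_this_rtt = cong_win // MSS          # one ACK per segment in the window
    cong_win += acks_this_rtt * MSS          # +1 MSS per ACK
    print("after RTT", rtt, ":", cong_win // MSS, "MSS")
# prints 2, 4, 8, 16 MSS -- exponential growth until the first loss event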

Refinement
After 3 dup ACKs:
CongWin is cut in half; the window then grows linearly
this is known as the "fast recovery" phase, implemented in the newer TCP Reno
But after a timeout event:
CongWin is instead set to 1 MSS; the window then grows exponentially up to a threshold, then grows linearly
Philosophy:
3 dup ACKs indicate the network is capable of delivering some segments
a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.
Implementation:
variable Threshold
at a loss event, Threshold is set to 1/2 of CongWin just before the loss event
[Figure: congestion window size (segments) vs. transmission round, showing the threshold and the behavior of TCP Tahoe and TCP Reno.]

3-94

Summary: TCP Congestion Control
When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.
When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.
When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.
When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.
READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate and prepare for a loss.

3-95

TCP sender congestion control

Event: ACK receipt for previously unACKed data
  State: Slow Start (SS)
  TCP sender action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
  Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unACKed data
  State: Congestion Avoidance (CA)
  TCP sender action: CongWin = CongWin + MSS * (MSS/CongWin)
  Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
  State: SS or CA
  TCP sender action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
  Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
  State: SS or CA
  TCP sender action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
  Commentary: enter slow start

Event: duplicate ACK
  State: SS or CA
  TCP sender action: increment the duplicate-ACK count for the segment being ACKed
  Commentary: CongWin and Threshold not changed

3-96
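The table above can be read as an event-driven state machine. A compact Python sketch of it (the variable names and the initial Threshold are assumptions for illustration, not from any real TCP implementation):

MSS = 1000

class TcpCongestionControl:
    def __init__(self):
        self.cong_win = 1 * MSS
        self.threshold = 64 * MSS      # assumed initial value
        self.state = "SS"              # "SS" = slow start, "CA" = congestion avoidance
        self.dup_acks = 0

    def on_new_ack(self):              # ACK for previously unACKed data
        if self.state == "SS":
            self.cong_win += MSS                          # doubles CongWin every RTT
            if self.cong_win > self.threshold:
                self.state = "CA"
        else:                                             # congestion avoidance
            self.cong_win += MSS * MSS // self.cong_win   # roughly +1 MSS per RTT
        self.dup_acks = 0

    def on_dup_ack(self):
        self.dup_acks += 1
        if self.dup_acks == 3:                            # triple duplicate ACK
            self.threshold = self.cong_win // 2
            self.cong_win = max(MSS, self.threshold)      # fast recovery (Reno)
            self.state = "CA"

    def on_timeout(self):
        self.threshold = self.cong_win // 2
        self.cong_win = 1 * MSS
        self.state = "SS"                                 # re-enter slow start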

TCP throughput
What is the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)
Let W be the window size when a loss occurs.
When the window is W, the throughput is W/RTT.
Just after the loss, the window drops to W/2 and the throughput to W/(2·RTT).
Average throughput: 0.75 W/RTT

3-97
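The 0.75 factor comes from averaging the sawtooth. As a short math note (assuming the idealized model of this slide, where the window ramps linearly from W/2 back up to W between losses):

\bar{W} = \frac{1}{2}\left(\frac{W}{2} + W\right) = \frac{3}{4}W
\qquad\Longrightarrow\qquad
\text{average throughput} \approx \frac{0.75\,W}{\mathrm{RTT}} \ \text{bytes/sec}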

TCP Futures
Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires a window size of W = 83,333 in-flight segments
Throughput in terms of loss rate: Throughput = 1.22 · MSS / (RTT · √L)
To sustain 10 Gbps this requires L = 2·10^-10. Wow!
New versions of TCP for high-speed networks are needed!

3-98
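A back-of-the-envelope check of the numbers on this slide, using the throughput formula above (a sketch only; the constant 1.22 is taken from the formula as given):

MSS_bits = 1500 * 8        # 1500-byte segments
RTT = 0.100                # 100 ms
target = 10e9              # 10 Gbps

W = target * RTT / MSS_bits                   # required in-flight segments
L = (1.22 * MSS_bits / (RTT * target)) ** 2   # tolerable loss rate

print(W)   # about 83,333 segments
print(L)   # about 2.1e-10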

TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.
[Figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R.]

3-99

Why is TCP fair?
Two competing homogeneous sessions (e.g., similar propagation delay, etc.):
additive increase gives a slope of 1 as throughput increases
multiplicative decrease decreases throughput proportionally
[Figure: Connection 1 throughput vs. Connection 2 throughput, bounded by R on each axis; the trajectory alternates between congestion-avoidance additive increase and halving the window on loss, converging toward the equal-bandwidth-share line.]

3-100
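The convergence argument can be seen in a toy simulation of two AIMD flows sharing a bottleneck of capacity R (an assumed model, not from the slides: both flows add 1 unit per round, and both halve whenever the combined rate exceeds R):

R = 100.0
x1, x2 = 80.0, 10.0                 # deliberately unequal starting rates
for _ in range(200):
    if x1 + x2 > R:
        x1, x2 = x1 / 2, x2 / 2     # multiplicative decrease on loss
    else:
        x1, x2 = x1 + 1, x2 + 1     # additive increase
print(round(x1, 1), round(x2, 1))   # roughly equal bandwidth shares

Additive increase moves the pair parallel to the 45-degree line, while halving moves it toward the origin along a ray, so the gap between the two rates shrinks by half at every loss.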

Fairness (more)
Fairness and UDP:
multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
research area: TCP-friendly congestion control
Fairness and parallel TCP connections:
nothing prevents an app from opening parallel connections between 2 hosts
Web browsers do this
Example: a link of rate R supporting 9 connections:
a new app asking for 1 TCP connection gets rate R/10
a new app asking for 11 TCP connections gets around R/2

3-101
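The arithmetic behind the R/10 vs. R/2 example, assuming the bottleneck is shared equally per connection (a simplification, as the slide implies):

R = 1.0
existing = 9
one = R / (existing + 1)               # new app opens 1 connection
eleven = 11 * R / (existing + 11)      # new app opens 11 parallel connections
print(one)      # 0.1  -> R/10
print(eleven)   # 0.55 -> around R/2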

Delay modeling
Q: How long does it take to receive an object from a Web server after sending a request?
Delay is influenced by:
TCP connection establishment
data transmission delay
slow start (the sender's congestion window)
Notation, assumptions:
assume one link between client and server, of rate R
S: MSS (bits); O: object size (bits)
no retransmissions (no loss, no corruption)
the protocol's overhead is negligible
Window size:
first assume a fixed congestion window of W segments
then dynamic window modeling: slow start

3-102

Fixed congestion window (1): assume no congestion-window constraint
Let W denote a 'fixed' congestion window size (a positive integer).
First case: WS/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data has been sent.
delay = 2RTT + O/R
(The object request is piggybacked; this is the "lower bound" on latency.)

3-103

Fixed (or static) congestion window (2)
Second case: WS/R < RTT + S/R: the sender must wait for an ACK after sending each window's worth of data.
delay = 2RTT + O/R + (K−1)·[S/R + RTT − WS/R]
where K is the number of windows of data that cover the object: if O is the size of the object, K = O/(WS), rounded up to an integer.

3-104
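Both static-window cases can be wrapped in one small helper (an assumed sketch; O and S in bits, R in bits/sec, RTT in seconds, W in segments):

import math

def static_window_latency(O, S, R, RTT, W):
    K = math.ceil(O / (W * S))              # number of windows covering the object
    if W * S / R > RTT + S / R:             # case 1: ACK returns before the window is exhausted
        return 2 * RTT + O / R
    # case 2: sender stalls after each window
    return 2 * RTT + O / R + (K - 1) * (S / R + RTT - W * S / R)

# hypothetical example: O = 100 kbytes, S = 536 bytes, R = 1 Mbps, RTT = 100 ms, W = 4
print(static_window_latency(100_000 * 8, 536 * 8, 1e6, 0.100, 4))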

TCP Delay Modeling: Slow Start (1)
Now suppose the window grows according to slow start.
We will show that the delay for one object is:
Latency = 2RTT + O/R + P·[RTT + S/R] − (2^P − 1)·S/R
where P is the number of times TCP idles at the server:
P = min{Q, K − 1}
where Q is the number of times the server would idle if the object were of infinite size, and K is the number of windows that cover the object.

3-105

TCP Delay Modeling: Slow Start (2)
[Figure: timing diagram of slow start: initiate TCP connection, request object; first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R; complete transmission, object delivered; time at client / time at server.]
Example:
O/S = 15 segments
K = 4 windows
Q = 2
P = min{K−1, Q} = 2
so the server idles P = 2 times
Delay components:
2 RTT for connection establishment and request
O/R to transmit the object
time the server idles due to slow start

3-106

TCP Delay Modeling (3)
For k = 1, 2, ..., K:
2^(k−1)·S/R = time to transmit the kth window
[S/R + RTT − 2^(k−1)·S/R]+ = idle time after the kth window
S/R + RTT = time from when the server starts to send a segment until the server receives an acknowledgement
delay = 2RTT + O/R + sum over p = 1..P of idleTime_p
      = 2RTT + O/R + sum over k = 1..P of [S/R + RTT − 2^(k−1)·S/R]
      = 2RTT + O/R + P·[RTT + S/R] − (2^P − 1)·S/R
[Figure: the same slow-start timing diagram as on the previous slide (initiate TCP connection, request object, windows of S/R, 2S/R, 4S/R, 8S/R, object delivered), annotated with the terms above.]

3-107

TCP Delay Modeling (4)
Recall that K = the number of windows that cover the object. How do we calculate K?
K = min{k : 2^0·S + 2^1·S + ... + 2^(k−1)·S ≥ O}
  = min{k : 2^0 + 2^1 + ... + 2^(k−1) ≥ O/S}
  = min{k : 2^k − 1 ≥ O/S}
  = min{k : k ≥ log2(O/S + 1)}
  = ⌈log2(O/S + 1)⌉
The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).

3-108
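Slides 3-105 to 3-108 combine into a small calculator (an assumed sketch; the exact values in the experiment tables that follow may differ slightly from this formula because of rounding conventions in the slides):

import math

def slow_start_latency(O, S, R, RTT):
    K = math.ceil(math.log2(O / S + 1))             # windows covering the object
    Q = math.floor(math.log2(1 + RTT * R / S)) + 1  # idles if the object were infinite
    P = min(K - 1, Q)                               # times the server actually idles
    return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R

# Experiment A setting at R = 1 Mbps: S = 536 bytes, RTT = 100 ms, O = 100 kbytes
print(slow_start_latency(100_000 * 8, 536 * 8, 1e6, 0.100))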

Experiment A
S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

R         O/R        P   Min latency (O/R + 2·RTT)   Latency with slow start
28 kbps   28.6 sec   1   28.8 sec                    28.9 sec
100 kbps  8 sec      2   8.2 sec                     8.4 sec
1 Mbps    800 msec   5   1.0 sec                     1.5 sec
10 Mbps   80 msec    7   0.28 sec                    0.98 sec

Observation: slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B
S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

R         O/R        P   Min latency (O/R + 2·RTT)   Latency with slow start
28 kbps   1.43 sec   1   1.63 sec                    1.73 sec
100 kbps  0.4 sec    2   0.6 sec                     0.76 sec
1 Mbps    40 msec    3   0.24 sec                    0.52 sec
10 Mbps   4 msec     3   0.20 sec                    0.50 sec

Observation: slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C
S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

R         O/R        P   Min latency (O/R + 2·RTT)   Latency with slow start
28 kbps   1.43 sec   3   3.4 sec                     5.8 sec
100 kbps  0.4 sec    3   2.4 sec                     5.2 sec
1 Mbps    40 msec    3   2.0 sec                     5.0 sec
10 Mbps   4 msec     3   2.0 sec                     5.0 sec

Observation: slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling
Assume a Web page consists of:
1 base HTML page (of size O bits)
M images (each of size O bits)
Non-persistent HTTP:
M+1 TCP connections in series
Response time = (M+1)·O/R + (M+1)·2RTT + sum of idle times
Persistent HTTP:
2 RTT to request and receive the base HTML file
1 RTT to request and receive the M images
Response time = (M+1)·O/R + 3RTT + sum of idle times
Non-persistent HTTP with X parallel connections:
suppose M/X is an integer
1 TCP connection for the base file
M/X sets of parallel connections for the images
Response time = (M+1)·O/R + (M/X + 1)·2RTT + sum of idle times

3-112
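The three response-time formulas above, side by side (an assumed sketch; the "sum of idle times" terms are left out here, so these are lower bounds):

def non_persistent(M, O, R, RTT):
    return (M + 1) * O / R + (M + 1) * 2 * RTT

def persistent(M, O, R, RTT):
    return (M + 1) * O / R + 3 * RTT

def parallel_non_persistent(M, O, R, RTT, X):
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT

# the setting used on the next two slides: O = 5 kbytes, M = 10, X = 5, RTT = 100 ms
O, M, X, RTT = 5_000 * 8, 10, 5, 0.100
for R in (28_000, 100_000, 1e6, 10e6):
    print(R, non_persistent(M, O, R, RTT), persistent(M, O, R, RTT),
          parallel_non_persistent(M, O, R, RTT, X))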

[Figure: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.]
For a low-bandwidth connection, response time is dominated by transmission time.
Persistent connections give only a minor improvement over parallel connections.

3-113

[Figure: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.]
For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary
principles behind transport-layer services:
multiplexing, demultiplexing
reliable data transfer
flow control
congestion control
instantiation and implementation in the Internet: UDP, TCP
Next: leaving the network "edge" (application and transport layers), into the network "core"

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 48: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-48

Selective repeat in action (N=4)

Under GBN this pkt will be dropped

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 49: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-49

Selective repeat dilemma

Example seq rsquos 0 1 2 3 window size (N)=3

receiver sees no difference in two scenarios

incorrectly passes duplicate data as new in (a)

Q what relationship between seq size and window size

N lt= 2^k2

In real life we use k-bits to implement seq Practical issue

3-50

Why bother study reliable data transfer

We know it is provided by TCP so why bother to study

Sometimes we may need to implement ldquosome formrdquo of reliable transfer without the heavy duty TCP

A good example is multimedia streaming Even though the application is loss tolerant but if too many packets got lost it affects the visual quality So we may want to implement some for of reliable transfer

At the very least appreciate the ldquogood servicesrdquo provided by some Internet gurus

3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causes/costs of congestion: scenario 3

Another "cost" of congestion: when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted

[Diagram: lambda_out collapses toward zero as the offered load keeps growing]

System collapses (e.g., students)

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
- no explicit feedback from network
- congestion inferred from end-system observed loss, delay
- approach taken by TCP

Network-assisted congestion control:
- routers provide feedback to end systems
  - single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
  - explicit rate sender should send at

3-85

Case study ATM ABR congestion control

ABR: available bit rate:
- "elastic service"
- if sender's path "underloaded": sender should use available bandwidth
- if sender's path congested: sender throttled to minimum guaranteed rate

RM (resource management) cells:
- sent by sender, interspersed with data cells
- bits in RM cell set by switches ("network-assisted")
  - NI bit: no increase in rate (mild congestion)
  - CI bit: congestion indication
- RM cells returned to sender by receiver, with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell:
- congested switch may lower ER value in cell
- sender's send rate is thus set to the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in cells:
- set to 1 in congested switch
- if data cell preceding RM cell has EFCI set, receiver sets CI bit in returned RM cell

3-87

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP
- segment structure
- reliable data transfer
- flow control
- connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission:
  LastByteSent - LastByteAcked <= CongWin

Roughly,
  rate = CongWin / RTT  bytes/sec

CongWin is a dynamic function of perceived network congestion

How does sender perceive congestion?
- loss event = timeout or 3 duplicate acks
- TCP sender reduces rate (CongWin) after loss event

three mechanisms:
- AIMD
- slow start
- conservative after timeout events

Note: CC must be efficient to make use of available BW
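A quick numeric check of rate = CongWin/RTT (illustrative numbers, not taken from the slides):

MSS = 1460                 # bytes, a typical Ethernet-path MSS
cong_win = 10 * MSS        # 14,600 bytes allowed in flight
rtt = 0.100                # seconds
rate_bps = cong_win * 8 / rtt
print(f"{rate_bps / 1e6:.2f} Mbit/s")   # ~1.17 Mbit/s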

3-89

TCP AIMD

[Figure: congestion window (8, 16, 24 Kbytes) vs. time - the sawtooth of a long-lived TCP connection]

multiplicative decrease: cut CongWin in half after loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

Role of ACK:
1. An indication of loss
2. ACK is self-clocking: if delay is large, the rate at which the congestion window grows is reduced

Human analogy: HK government or CUHK
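A toy sketch of the sawtooth this produces (window in MSS units; the loss pattern is invented purely for illustration):

# additive increase: +1 MSS per RTT; multiplicative decrease: halve on loss
cong_win = 16.0
for rtt in range(1, 25):
    loss = (rtt % 8 == 0)            # stand-in for a real loss event
    cong_win = cong_win / 2 if loss else cong_win + 1
    print(rtt, cong_win)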

3-90

TCP Slow Start

When connection begins, CongWin = 1 MSS
- Example: MSS = 500 bytes & RTT = 200 msec
- initial rate = 20 kbps

available bandwidth may be >> MSS/RTT
- desirable to quickly ramp up to a respectable rate

When connection begins, increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins, increase rate exponentially until first loss event:
- double CongWin every RTT
- done by incrementing CongWin for every ACK received

Summary: initial rate is slow but ramps up exponentially fast

[Diagram: Host A sends one segment, then two, then four in successive RTTs as ACKs from Host B arrive]
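Why a per-ACK increment doubles the window every RTT - a sketch under the simplifying assumption that one full window of ACKs arrives per RTT and nothing is lost:

cong_win = 1                          # in segments (MSS)
for rtt in range(1, 5):
    acks = cong_win                   # one ACK per segment sent last round
    cong_win += acks                  # +1 MSS per ACK  =>  window doubles
    print("after RTT", rtt, ": CongWin =", cong_win, "segments")
# prints 2, 4, 8, 16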

3-92

Refinement

After 3 dup ACKs:
- CongWin is cut in half
- window then grows linearly
- this is known as the "fast-recovery" phase, implemented in newer TCP-Reno

But after a timeout event:
- CongWin is instead set to 1 MSS
- window then grows exponentially to a threshold, then grows linearly

Philosophy:
- 3 dup ACKs indicate the network is capable of delivering some segments
- a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout.

Implementation:
- Variable Threshold
- At loss event, Threshold is set to 1/2 of CongWin just before the loss event

[Figure: congestion window size (segments) vs. transmission round (1-15), showing the threshold and the trajectories of TCP Tahoe and TCP Reno]

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, sender is in slow-start phase; window grows exponentially.

When CongWin is above Threshold, sender is in congestion-avoidance phase; window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP-Vegas: instead of reacting to a loss, anticipate & prepare for a loss.

3-95

TCP sender congestion control

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
TCP Sender Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
TCP Sender Action: CongWin = CongWin + MSS * (MSS/CongWin)
Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: Loss event detected by triple duplicate ACK
State: SS or CA
TCP Sender Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: Timeout
State: SS or CA
TCP Sender Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: enter slow start

Event: Duplicate ACK
State: SS or CA
TCP Sender Action: increment duplicate ACK count for segment being acked
Commentary: CongWin and Threshold not changed
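A compact sketch of the table above in Python (window and threshold tracked in MSS units; this illustrates the state machine, it is not a real TCP implementation):

class TcpSenderCC:
    """Event-driven congestion-window logic from the table above (units: MSS)."""

    def __init__(self, threshold=8.0):
        self.cong_win = 1.0
        self.threshold = threshold
        self.state = "SS"                 # "SS" = slow start, "CA" = congestion avoidance
        self.dup_acks = 0

    def on_new_ack(self):                 # ACK for previously unacked data
        self.dup_acks = 0
        if self.state == "SS":
            self.cong_win += 1.0          # doubles CongWin every RTT
            if self.cong_win > self.threshold:
                self.state = "CA"
        else:
            self.cong_win += 1.0 / self.cong_win   # ~ +1 MSS per RTT

    def on_dup_ack(self):                 # duplicate ACK
        self.dup_acks += 1
        if self.dup_acks == 3:            # triple duplicate ACK: fast recovery
            self.threshold = self.cong_win / 2
            self.cong_win = max(self.threshold, 1.0)
            self.state = "CA"

    def on_timeout(self):                 # timeout: back to slow start
        self.threshold = self.cong_win / 2
        self.cong_win = 1.0
        self.state = "SS"
        self.dup_acks = 0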

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when loss occurs.
- When the window is W, throughput is W/RTT
- Just after a loss, the window drops to W/2, throughput to W/(2 RTT)
- Average throughput: 0.75 W/RTT
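Where the 0.75 comes from: between loss events the window ramps linearly from W/2 back up to W, so its time-average is

  (W/2 + W) / 2 = (3/4) W   =>   average throughput ~ 0.75 W / RTT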

3-97

TCP Futures

Example: 1500 byte segments, 100 ms RTT, want 10 Gbps throughput

Requires window size W = 83,333 in-flight segments

Throughput in terms of loss rate:

  Throughput = 1.22 * MSS / (RTT * sqrt(L))

-> L = 2 * 10^-10   Wow!

New versions of TCP for high-speed networks needed!
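A quick check of that loss-rate figure using the formula (Python, bits and seconds):

from math import sqrt
# Throughput = 1.22 * MSS / (RTT * sqrt(L)), solved for L at 10 Gbps
MSS = 1500 * 8        # bits
RTT = 0.100           # seconds
target = 10e9         # bits/sec
L = (1.22 * MSS / (RTT * target)) ** 2
print(L)              # ~2.1e-10: at most about one loss every 5 billion segments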

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

[Diagram: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g., similar propagation delay, etc.):
- additive increase gives a slope of 1 as throughput increases
- multiplicative decrease decreases throughput proportionally

[Figure: Connection 2 throughput vs. Connection 1 throughput, both axes 0 to R; the trajectory alternates between congestion avoidance (additive increase) and loss (decrease window by factor of 2), converging toward the equal bandwidth share line]
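A toy simulation of that picture (two flows, synchronized losses whenever the link of capacity R is full; numbers are illustrative):

R = 100.0
x, y = 90.0, 10.0                    # deliberately unequal starting shares
for _ in range(200):
    if x + y >= R:                   # bottleneck saturated: both flows see a loss
        x, y = x / 2, y / 2          # multiplicative decrease
    x, y = x + 1, y + 1              # additive increase
print(x, y)                          # x and y end up nearly equal: the fair-share line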

3-100

Fairness (more)

Fairness and UDP:
- multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
- instead use UDP: pump audio/video at a constant rate, tolerate packet loss
- research area: TCP friendly

Fairness and parallel TCP connections:
- nothing prevents an app from opening parallel connections between 2 hosts
- Web browsers do this
- Example: link of rate R supporting 9 connections
  - new app asks for 1 TCP, gets rate R/10
  - new app asks for 11 TCPs, gets around R/2 (see the arithmetic below)
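The R/2 figure is simply per-connection fair sharing applied to 9 + 11 = 20 connections:

R = 1.0
share_per_connection = R / (9 + 11)      # 20 equal per-connection shares
print(11 * share_per_connection)         # 0.55 R, i.e. roughly R/2 for the new app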

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by:
- TCP connection establishment
- data transmission delay
- slow start (sender's congestion window)

Notation, assumptions:
- assume one link between client and server, of rate R
- S: MSS (bits)
- O: object size (bits)
- no retransmissions (no loss, no corruption)
- protocol overhead is negligible

Window size:
- first assume: fixed congestion window, W segments
- then dynamic window, modeling slow start

3-102

Fixed congestion window (1)

Let W denote a 'fixed' congestion window size (a positive integer).

First case: W S/R > RTT + S/R
- here the congestion window imposes no real constraint: the ACK for the first segment in the window returns before a window's worth of data has been sent

delay = 2 RTT + O/R

- the object request is piggybacked
- this is the "lower bound" of latency

3-103

Fixed (or static) congestion window (2)

Second case: W S/R < RTT + S/R
- sender must wait for an ACK after sending a window's worth of data

delay = 2 RTT + O/R + (K-1) [S/R + RTT - W S/R]

where K is the number of windows of data that cover the object: if O is the "size" of the object, K = O/(W S), rounded up to an integer
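The two cases can be folded into one small helper (a sketch; O and S in bits, R in bits/sec, RTT in seconds, parameter names chosen here for illustration):

import math

def static_window_delay(O, R, S, W, RTT):
    """Latency of the fixed-window model on one link, no loss."""
    K = math.ceil(O / (W * S))                 # number of windows covering the object
    stall = S / R + RTT - W * S / R            # idle time per window, if positive
    if stall <= 0:                             # first case: the pipe is kept full
        return 2 * RTT + O / R
    return 2 * RTT + O / R + (K - 1) * stall   # second case: sender idles K-1 times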

3-104

TCP Delay Modeling Slow Start (1)

Now suppose the window grows according to slow start.

Will show that the delay for one object is:

  Latency = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where P is the number of times TCP idles at the server:

  P = min{Q, K-1}

- where Q is the number of times the server would idle if the object were of infinite size
- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[Timing diagram: client initiates TCP connection (RTT), requests object; server sends first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R; transmission completes and the object is delivered; time runs downward at client and at server]

Example:
- O/S = 15 segments
- K = 4 windows
- Q = 2
- P = min{K-1, Q} = 2
- server idles P = 2 times

Delay components:
- 2 RTT for connection establishment and request
- O/R to transmit the object
- time the server idles due to slow start

Server idles P = min{K-1, Q} times

3-106

TCP Delay Modeling (3)

delay = 2 RTT + O/R + sum over k = 1..P of idleTime_k

      = 2 RTT + O/R + sum over k = 1..P of [ S/R + RTT - 2^(k-1) S/R ]

      = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where, for the kth window (k = 1, 2, ..., K):
- time to transmit the kth window: 2^(k-1) S/R
- time from when the server starts to send a segment until the server receives its acknowledgement: S/R + RTT
- idle time after the kth window: [ S/R + RTT - 2^(k-1) S/R ]+, i.e. zero if negative

[Timing diagram as on the previous slide: initiate TCP connection (RTT), request object, windows of S/R, 2S/R, 4S/R, 8S/R, complete transmission, object delivered; time at client / time at server]

3-107

TCP Delay Modeling (4)

Recall: K = the number of windows that cover the object. How do we calculate K?

  K = min { k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O }
    = min { k : (2^k - 1) S >= O }
    = min { k : 2^k >= O/S + 1 }
    = min { k : k >= log2(O/S + 1) }
    = ceil( log2(O/S + 1) )

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
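Putting K, Q and P together gives a small calculator for the slow-start latency model above (a sketch in Python; Q counts the windows for which 2^(k-1) S/R < S/R + RTT, and results can differ slightly from the tables that follow due to rounding):

import math

def slow_start_latency(O, R, S, RTT):
    """Latency with slow start per the model above (O, S in bits; R in bits/sec)."""
    K = math.ceil(math.log2(O / S + 1))              # windows that cover the object
    Q = math.floor(math.log2(1 + RTT * R / S)) + 1   # idles for an infinite-size object
    P = min(K - 1, Q)                                # times the server actually idles
    return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R

# e.g. S = 536 bytes, RTT = 100 msec, O = 100 kbytes, R = 28 kbps  ->  ~28.9 sec,
# matching the 28 kbps row of Experiment A below
print(slow_start_latency(O=100e3 * 8, R=28e3, S=536 * 8, RTT=0.1))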

3-108

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

R          O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    28.6 sec   1   28.8 sec                    28.9 sec
100 kbps   8 sec      2   8.2 sec                     8.4 sec
1 Mbps     800 msec   5   1 sec                       1.5 sec
10 Mbps    80 msec    7   0.28 sec                    0.98 sec

Observation: slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

R          O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    1.43 sec   1   1.63 sec                    1.73 sec
100 kbps   0.4 sec    2   0.6 sec                     0.76 sec
1 Mbps     40 msec    3   0.24 sec                    0.52 sec
10 Mbps    4 msec     3   0.20 sec                    0.50 sec

Observation: slow start adds appreciable delay when R is high, even for a relatively small object.

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

R          O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    1.43 sec   3   3.4 sec                     5.8 sec
100 kbps   0.4 sec    3   2.4 sec                     5.2 sec
1 Mbps     40 msec    3   2.0 sec                     5.0 sec
10 Mbps    4 msec     3   2.0 sec                     5.0 sec

Observation: slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
- 1 base HTML page (of size O bits)
- M images (each of size O bits)

Non-persistent HTTP:
- M+1 TCP connections in series
- Response time = (M+1) O/R + (M+1) 2 RTT + sum of idle times

Persistent HTTP:
- 2 RTT to request and receive the base HTML file
- 1 RTT to request and receive the M images
- Response time = (M+1) O/R + 3 RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
- suppose M/X is an integer
- 1 TCP connection for the base file
- M/X sets of parallel connections for the images
- Response time = (M+1) O/R + (M/X + 1) 2 RTT + sum of idle times
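The three formulas as code (a sketch that ignores the "sum of idle times" term; parameters in bits, bits/sec and seconds):

def nonpersistent(O, R, RTT, M):
    return (M + 1) * O / R + (M + 1) * 2 * RTT          # M+1 serial connections

def persistent(O, R, RTT, M):
    return (M + 1) * O / R + 3 * RTT                    # 2 RTT for base file + 1 RTT for images

def nonpersistent_parallel(O, R, RTT, M, X):
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT      # M/X batches of X parallel connections

# the setting of the charts below: O = 5 Kbytes, M = 10, X = 5, RTT = 100 msec
O, RTT, M, X = 5e3 * 8, 0.1, 10, 5
for R in (28e3, 100e3, 1e6, 10e6):
    print(R, nonpersistent(O, R, RTT, M), persistent(O, R, RTT, M),
          nonpersistent_parallel(O, R, RTT, M, X))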

3-112

[Bar chart: HTTP response time (in seconds), scale 0-20, for non-persistent, persistent, and parallel non-persistent HTTP at 28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps; RTT = 100 msec, O = 5 Kbytes, M = 10 and X = 5]

For low bandwidth, connection and response time are dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

[Bar chart: HTTP response time (in seconds), scale 0-70, for non-persistent, persistent, and parallel non-persistent HTTP at 28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps; RTT = 1 sec, O = 5 Kbytes, M = 10 and X = 5]

For larger RTT, response time is dominated by TCP establishment and slow start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services:
- multiplexing, demultiplexing
- reliable data transfer
- flow control
- congestion control

instantiation and implementation in the Internet:
- UDP
- TCP

Next:
- leaving the network "edge" (application, transport layers)
- into the network "core"




3-51

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-52

TCP Overview RFCs 793 1122 1323 2018 2581

full duplex data bi-directional data flow

in same connection MSS maximum

segment size connection-oriented

handshaking (exchange of control msgs) initrsquos sender receiver state (eg buffer size) before data exchange

flow controlled sender will not

overwhelm receiver

point-to-point one sender one

receiver (not multicast) reliable in-order byte

steam no ldquomessage

boundariesrdquo In App layer we need

delimiters pipelined

TCP congestion and flow control set window size

send amp receive bufferssocketdoor

T C Psend buffer

T C Preceive buffer

socketdoor

segm ent

applicationwrites data

applicationreads data

(The 800 lbs gorilla in the transport stack PAY ATTENTION)

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous)

one router, infinite buffers, capacity C

no retransmission

large delays when congested (queueing delay grows as offered load approaches capacity)

maximum achievable per-connection throughput: C/2

[Diagram: Host A offers λin (original data, from application) into a router with unlimited shared output link buffers; Host B delivers λout to the application]

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers; sender retransmission of lost packet

[Diagram: Host A offers λin (original data) and λ'in (original data plus retransmitted data) into a router with finite shared output link buffers; Host B receives λout. Retransmissions happen because the transport layer resends packets that got dropped.]

3-81

Causes/costs of congestion: scenario 2

always: λin = λout (goodput)

"perfect" retransmission only when loss: λ'in > λout

retransmission of delayed (not lost) packet makes λ'in larger (than perfect case) for same λout

"costs" of congestion:

more work (retrans) for given "goodput"

unneeded retransmissions: link carries multiple copies of pkt

3-82

Causes/costs of congestion: scenario 3

four senders; multihop paths; timeout/retransmit

Q: what happens as λin and λ'in increase?

[Diagram: Host A offers λin (original data) and λ'in (original data plus retransmitted data) across routers with finite shared output link buffers; Host B receives λout]

3-83

Causescosts of congestion scenario 3

Another "cost" of congestion: when packet dropped, any upstream transmission capacity used for that packet was wasted

[Figure: λout collapses toward zero as offered load grows]

System collapses (eg students)

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:

no explicit feedback from network

congestion inferred from end-system observed loss, delay

approach taken by TCP

Network-assisted congestion control:

routers provide feedback to end systems: single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM), or explicit rate sender should send at

3-85

Case study ATM ABR congestion control

ABR: available bit rate: "elastic service"

if sender's path "underloaded": sender should use available bandwidth

if sender's path congested: sender throttled to minimum guaranteed rate

RM (resource management) cells: sent by sender, interspersed with data cells

bits in RM cell set by switches ("network-assisted"):

NI bit: no increase in rate (mild congestion)

CI bit: congestion indication

RM cells returned to sender by receiver, with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell: congested switch may lower ER value in cell; sender's send rate thus set to minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells: set to 1 in congested switch; if data cell preceding RM cell has EFCI set, receiver sets CI bit in returned RM cell

3-87

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP: segment structure, reliable data transfer, flow control, connection management

3.6 Principles of congestion control

3.7 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission: LastByteSent - LastByteAcked <= CongWin

Roughly, rate = CongWin / RTT bytes/sec

CongWin is dynamic, a function of perceived network congestion

How does sender perceive congestion?

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms: AIMD, slow start, conservative after timeout events

Note: CC must be efficient to make use of available BW

3-89

TCP AIMD

[Figure: congestion window of a long-lived TCP connection: sawtooth between roughly 8, 16 and 24 Kbytes over time]

multiplicative decrease: cut CongWin in half after loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

Role of ACK:
1. An indication of loss
2. ACKs are self-clocking: if delay is large, the rate at which the congestion window grows is reduced

Human analogy: HK government or CUHK
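A toy simulation of the sawtooth (illustrative only; loss_every is an artificial loss pattern, not a network model):

    # Minimal sketch: the AIMD sawtooth on CongWin, in MSS units.
    MSS = 1

    def aimd(rounds, loss_every=10):
        congwin = 1 * MSS
        trace = []
        for rtt in range(1, rounds + 1):
            trace.append(congwin)
            if rtt % loss_every == 0:                  # pretend a loss event happens here
                congwin = max(1 * MSS, congwin / 2)    # multiplicative decrease
            else:
                congwin += 1 * MSS                     # additive increase: +1 MSS per RTT
        return trace

    print(aimd(30))   # sawtooth: ramps linearly, halves on each simulated loss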

3-90

TCP Slow Start

When connection begins, CongWin = 1 MSS

Example: MSS = 500 bytes & RTT = 200 msec; initial rate = MSS/RTT = 20 kbps

available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to respectable rate

When connection begins, increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins, increase rate exponentially until first loss event: double CongWin every RTT; done by incrementing CongWin for every ACK received

Summary: initial rate is slow but ramps up exponentially fast

[Diagram: Host A sends one segment, then two segments, then four segments in successive RTTs to Host B]
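A quick sketch of why per-ACK increments double the window each RTT (assumes one ACK per segment and no delayed ACKs):

    congwin = 1                          # in MSS
    for rtt in range(4):
        acks_this_rtt = congwin          # one ACK comes back per segment sent this RTT
        congwin += acks_this_rtt         # +1 MSS per ACK  ->  CongWin doubles
        print(f"after RTT {rtt + 1}: CongWin = {congwin} MSS")
    # prints 2, 4, 8, 16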

3-92

Refinement: after 3 dup ACKs:

CongWin is cut in half; window then grows linearly. This is known as the "fast recovery" phase, implemented in the newer TCP Reno.

But after a timeout event: CongWin instead set to 1 MSS; window then grows exponentially to a threshold, then grows linearly.

Philosophy:
• 3 dup ACKs indicates network capable of delivering some segments
• timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?

A: When CongWin gets to 1/2 of its value before timeout.

Implementation: Variable Threshold. At loss event, Threshold is set to 1/2 of CongWin just before the loss event.

[Figure: congestion window size (segments) vs. transmission round, showing the threshold line and the TCP Tahoe vs. TCP Reno trajectories after a loss]

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, sender is in slow-start phase, window grows exponentially.

When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly.

When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold.

When timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, anticipate & prepare for a loss.
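Pulling the four rules together, a minimal sketch in MSS units (an illustration, not a real TCP implementation; the class name and initial threshold are made up):

    class RenoLikeSender:
        def __init__(self, threshold=8):
            self.congwin = 1.0                       # slow start begins at 1 MSS
            self.threshold = float(threshold)

        def on_new_ack(self):
            if self.congwin < self.threshold:        # slow start: exponential growth
                self.congwin += 1                    # +1 MSS per ACK -> doubles per RTT
            else:                                    # congestion avoidance: linear growth
                self.congwin += 1 / self.congwin     # +MSS*(MSS/CongWin) per ACK -> +1 MSS per RTT

        def on_triple_dup_ack(self):
            self.threshold = self.congwin / 2
            self.congwin = self.threshold            # fast recovery (Reno)

        def on_timeout(self):
            self.threshold = self.congwin / 2
            self.congwin = 1.0                       # back to slow start (Tahoe-style reaction)

    s = RenoLikeSender(threshold=8)
    for _ in range(20):
        s.on_new_ack()
    s.on_triple_dup_ack()
    print(round(s.congwin, 2), s.threshold)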

3-95

TCP sender congestion control

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
TCP Sender Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: resulting in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
TCP Sender Action: CongWin = CongWin + MSS * (MSS/CongWin)
Commentary: additive increase, resulting in increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
State: SS or CA
TCP Sender Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
State: SS or CA
TCP Sender Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: enter slow start

Event: duplicate ACK
State: SS or CA
TCP Sender Action: increment duplicate ACK count for segment being acked
Commentary: CongWin and Threshold not changed

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when loss occurs.

When window is W, throughput is W/RTT

Just after loss, window drops to W/2, throughput to W/(2·RTT)

Average throughput: 0.75 W/RTT (the window ramps linearly between W/2 and W, so the average window is 0.75 W)

3-97

TCP Futures

Example: 1500 byte segments, 100 ms RTT, want 10 Gbps throughput

Requires window size W = 83,333 in-flight segments

Throughput in terms of loss rate:

  Throughput = (1.22 · MSS) / (RTT · √L)

To get 10 Gbps with these numbers: L ≈ 2·10^-10. Wow! New versions of TCP for high-speed needed!
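Checking the slide's numbers with the formula (a sketch; the loss rate comes out of the same arithmetic):

    from math import sqrt

    # throughput = 1.22 * MSS / (RTT * sqrt(L))
    MSS_BITS = 1500 * 8
    RTT = 0.100                      # seconds
    TARGET = 10e9                    # 10 Gbps

    W = TARGET * RTT / MSS_BITS                    # in-flight segments needed
    L = (1.22 * MSS_BITS / (TARGET * RTT)) ** 2    # loss rate that sustains the target
    print(round(W), L)               # ~83333 segments, L on the order of 2e-10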

3-98

Fairness goal: if K TCP sessions share same bottleneck link of bandwidth R, each should have average rate of R/K

[Diagram: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R]

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g. similar propagation delay, etc.)

Additive increase gives slope of 1 as throughput increases

Multiplicative decrease decreases throughput proportionally

[Figure: Connection 2 throughput vs. Connection 1 throughput, each bounded by R; the trajectory alternates "congestion avoidance: additive increase" and "loss: decrease window by factor of 2", converging toward the equal bandwidth share line]

3-100

Fairness (more)

Fairness and UDP:

Multimedia apps often do not use TCP: do not want rate throttled by congestion control

Instead use UDP: pump audio/video at constant rate, tolerate packet loss

Research area: TCP friendly

Fairness and parallel TCP connections:

nothing prevents app from opening parallel connections between 2 hosts

Web browsers do this

Example: link of rate R supporting 9 connections; new app asks for 1 TCP, gets rate R/10; new app asks for 11 TCPs, gets around R/2 (see the sketch below)
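The arithmetic behind the R/10 vs. R/2 example (R normalized to 1; assumes the bottleneck is shared equally per connection):

    def new_app_share(existing_conns, new_conns):
        total = existing_conns + new_conns
        return new_conns / total        # fraction of R the new app gets in aggregate

    print(new_app_share(9, 1))          # 0.1  -> R/10
    print(new_app_share(9, 11))         # 0.55 -> roughly R/2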

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by: TCP connection establishment, data transmission delay, slow start, sender's congestion window

Notation, assumptions:

Assume one link between client and server, of rate R

S: MSS (bits); O: object size (bits)

no retransmissions (no loss, no corruption); protocol overhead is negligible

Window size:

First assume: fixed congestion window, W segments

Then dynamic window, modeling slow start

3-102

Fixed congestion window (1): assume no congestion window constraint

Let W denote a 'fixed' congestion window size (a positive integer)

First case: WS/R > RTT + S/R: ACK for first segment in window returns before window's worth of data sent

delay = 2·RTT + O/R

(Object request is piggybacked; this is the "lower bound" of latency.)

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R: wait for ACK after sending window's worth of data

delay = 2·RTT + O/R + (K−1)·[S/R + RTT − WS/R]

where K is the number of windows of data that cover the object: if O is the size of the object, K = O/(WS), rounded up to an integer
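A small sketch of the two cases (parameter values are made up for illustration):

    from math import ceil

    def fixed_window_delay(O, S, R, RTT, W):
        K = ceil(O / (W * S))                       # windows needed to cover the object
        stall = S / R + RTT - W * S / R             # idle time per window, if positive
        if stall <= 0:                              # case 1: WS/R >= RTT + S/R
            return 2 * RTT + O / R
        return 2 * RTT + O / R + (K - 1) * stall    # case 2: server idles K-1 times

    # Example: O = 100 kbit object, S = 8 kbit segments, R = 1 Mbps, RTT = 100 ms, W = 4
    print(fixed_window_delay(O=100e3, S=8e3, R=1e6, RTT=0.1, W=4))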

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

  Latency = 2·RTT + O/R + P·(RTT + S/R) − (2^P − 1)·S/R

where P is the number of times TCP idles at the server:

  P = min{Q, K − 1}

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[Diagram: timeline at client and server: initiate TCP connection (RTT), request object, then windows of S/R, 2S/R, 4S/R, 8S/R transmitted; object delivered, transmission complete]

Example:
• O/S = 15 segments
• K = 4 windows
• Q = 2
• P = min{K−1, Q} = 2

Server idles P = 2 times

Delay components:
• 2 RTT for connection establishment and request
• O/R to transmit object
• time server idles due to slow start

Server idles P = min{K−1, Q} times

3-106

TCP Delay Modeling (3)

For k = 1, 2, …, K:

  time to transmit the kth window = 2^(k−1) · S/R

  S/R + RTT = time from when server starts to send a segment until server receives its acknowledgement

  idle time after the kth window = [S/R + RTT − 2^(k−1) · S/R]^+

  delay = 2·RTT + O/R + Σ (k = 1 to P) idleTime_k
        = 2·RTT + O/R + Σ (k = 1 to P) [S/R + RTT − 2^(k−1) · S/R]
        = 2·RTT + O/R + P·[RTT + S/R] − (2^P − 1)·S/R

[Diagram: same client/server timeline as before, with windows S/R, 2S/R, 4S/R, 8S/R]

3-107

TCP Delay Modeling (4)

Recall K = number of windows that cover the object. How do we calculate K?

  K = min{k : 2^0·S + 2^1·S + … + 2^(k−1)·S ≥ O}
    = min{k : (2^k − 1)·S ≥ O}
    = min{k : 2^k ≥ O/S + 1}
    = min{k : k ≥ log2(O/S + 1)}
    = ⌈log2(O/S + 1)⌉

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
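The model is easy to evaluate numerically; the sketch below computes K, Q and P directly from the definitions and then applies the latency formula (parameters taken from the Experiment A row for R = 1 Mbps; the exact table values may differ slightly due to rounding):

    from math import ceil, log2

    def slow_start_latency(O, S, R, RTT):
        K = ceil(log2(O / S + 1))                 # windows that cover the object
        # Q: number of idle periods for an infinite-size object, found directly
        # from the idle-time condition S/R + RTT - 2**(k-1) * S/R > 0.
        Q, k = 0, 1
        while S / R + RTT - 2 ** (k - 1) * S / R > 0:
            Q += 1
            k += 1
        P = min(K - 1, Q)                         # times the server actually idles
        return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R, P

    latency, P = slow_start_latency(O=100e3 * 8, S=536 * 8, R=1e6, RTT=0.1)
    print(P, round(latency, 2))                   # P = 5, latency around 1.4 s for this row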

3-108

Experiment A

R         O/R        P   Min latency (O/R + 2·RTT)   Latency with slow start
28 kbps   28.6 sec   1   28.8 sec                    28.9 sec
100 kbps  8.0 sec    2   8.2 sec                     8.4 sec
1 Mbps    800 msec   5   1.0 sec                     1.5 sec
10 Mbps   80 msec    7   0.28 sec                    0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

1. Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R         O/R        P   Min latency (O/R + 2·RTT)   Latency with slow start
28 kbps   1.43 sec   1   1.63 sec                    1.73 sec
100 kbps  0.4 sec    2   0.6 sec                     0.76 sec
1 Mbps    40 msec    3   0.24 sec                    0.52 sec
10 Mbps   4 msec     3   0.20 sec                    0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

1. Slow start adds appreciable delay when R is high, even for a relatively small object.

3-110

Experiment C

R         O/R        P   Min latency (O/R + 2·RTT)   Latency with slow start
28 kbps   1.43 sec   3   3.4 sec                     5.8 sec
100 kbps  0.4 sec    3   2.4 sec                     5.2 sec
1 Mbps    40 msec    3   2.0 sec                     5.0 sec
10 Mbps   4 msec     3   2.0 sec                     5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

1. Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume Web page consists of: 1 base HTML page (of size O bits), M images (each of size O bits)

Non-persistent HTTP:
M+1 TCP connections in series
Response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times

Persistent HTTP:
2 RTT to request and receive base HTML file, 1 RTT to request and receive M images
Response time = (M+1)·O/R + 3·RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
Suppose M/X is an integer; 1 TCP connection for base file, M/X sets of parallel connections for images
Response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times
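A sketch that evaluates the three formulas while ignoring the "sum of idle times" terms (so the numbers are lower bounds; parameters match the charts that follow):

    def http_response_times(O, M, X, R, RTT):
        non_persistent = (M + 1) * O / R + (M + 1) * 2 * RTT
        persistent     = (M + 1) * O / R + 3 * RTT
        parallel       = (M + 1) * O / R + (M / X + 1) * 2 * RTT
        return non_persistent, persistent, parallel

    for R in (28e3, 100e3, 1e6, 10e6):
        print(R, [round(t, 2) for t in http_response_times(O=5e3 * 8, M=10, X=5, R=R, RTT=0.1)])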

3-112

[Chart: HTTP response time in seconds at 28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5]

For low bandwidth, connection and response time are dominated by transmission time.

Persistent connections give only minor improvement over parallel connections.

3-113

[Chart: HTTP response time in seconds at 28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5]

For larger RTT, response time is dominated by TCP establishment and slow start delays. Persistent connections now give important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services: multiplexing, demultiplexing, reliable data transfer, flow control, congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application, transport layers), into the network "core"

3-53

TCP segment structure

source port dest port

32 bits

applicationdata

(variable length)

sequence number

acknowledgement numberReceive window

Urg data pnterchecksum

FSRPAUheadlen

notused

Options (variable length)

URG urgent data (generally not used)

ACK ACK valid

PSH push data now(generally not used)

RST SYN FINconnection estab(setup teardown

commands)

bytes rcvr willingto accept

countingby ldquobytesrdquo of data(not segments)

Internetchecksum

(as in UDP)

Due to this field we have a variable length header

3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers
sender retransmits lost packets

[Figure: Host A sends λ_in (original data) and λ'_in (original data plus retransmitted data) into a router with finite shared output link buffers; λ_out is delivered to Host B. Retransmissions are due to the transport layer: packets got dropped.]

3-81

Causes/costs of congestion: scenario 2

Always: λ_in = λ_out (goodput)
"Perfect" retransmission (only when loss): λ'_in > λ_out
Retransmission of delayed (not lost) packets makes λ'_in larger (than the perfect case) for the same λ_out

"Costs" of congestion:
  more work (retransmissions) for a given "goodput"
  unneeded retransmissions: link carries multiple copies of a pkt

3-82

Causes/costs of congestion: scenario 3

four senders
multihop paths
timeout/retransmit

Q: what happens as λ_in and λ'_in increase?

[Figure: Host A sends λ_in (original data) and λ'_in (original data plus retransmitted data) through routers with finite shared output link buffers toward Host B; λ_out is what gets delivered.]

3-83

Causes/costs of congestion: scenario 3

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted.

[Figure: λ_out collapses toward zero as offered load grows.]

The system collapses (analogy: e.g. students under overload).

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
  no explicit feedback from the network
  congestion inferred from end-system observed loss, delay
  approach taken by TCP

Network-assisted congestion control:
  routers provide feedback to end systems
  single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
  explicit rate the sender should send at

3-85

Case study ATM ABR congestion control

ABR: available bit rate:
  "elastic service"
  if sender's path is "underloaded": sender should use the available bandwidth
  if sender's path is congested: sender throttled to its minimum guaranteed rate

RM (resource management) cells:
  sent by sender, interspersed with data cells
  bits in RM cell set by switches ("network-assisted"):
    NI bit: no increase in rate (mild congestion)
    CI bit: congestion indication
  RM cells returned to sender by receiver, with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell:
  congested switch may lower the ER value in the cell
  sender's send rate is thus the minimum supportable rate on the path

EFCI (explicit forward congestion indication) bit in data cells:
  set to 1 by a congested switch
  if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP (segment structure, reliable data transfer, flow control, connection management)
3.6 Principles of congestion control
3.7 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission:
  LastByteSent - LastByteAcked <= CongWin

Roughly,
  rate = CongWin / RTT   bytes/sec

CongWin is a dynamic function of perceived network congestion.

How does the sender perceive congestion?
  loss event = timeout or 3 duplicate ACKs
  TCP sender reduces rate (CongWin) after a loss event

Three mechanisms:
  AIMD
  slow start
  conservative after timeout events

Note: CC must be efficient to make use of the available BW.
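A quick sanity check on the rate formula (CongWin, MSS and RTT values are assumed):

    mss = 1500                 # bytes per segment (assumed)
    cong_win = 10 * mss        # 10 segments outstanding
    rtt = 0.100                # seconds

    rate_bps = cong_win * 8 / rtt
    print(f"rate = {rate_bps/1e6:.2f} Mbit/s")   # -> 1.20 Mbit/s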

3-89

TCP AIMD

[Figure: congestion window sawtooth over time for a long-lived TCP connection; y-axis marked at 8, 16, 24 Kbytes.]

multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

Role of the ACK:
  1. an indication of loss (or its absence)
  2. ACKs are self-clocking: if the delay is large, the rate at which the congestion window grows is reduced

Human analogy: HK government or CUHK.
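A minimal sketch of how AIMD produces that sawtooth (the loss pattern and units are illustrative assumptions, not from the slides):

    # AIMD: additive increase of 1 MSS per RTT, halve the window on loss.
    mss = 1                # measure the window in MSS units
    cong_win = 1.0
    trace = []
    for rtt_round in range(1, 41):
        trace.append(cong_win)
        loss = (rtt_round % 10 == 0)          # pretend a loss happens every 10th RTT
        if loss:
            cong_win = max(cong_win / 2, mss)  # multiplicative decrease
        else:
            cong_win += mss                    # additive increase
    print([round(w, 1) for w in trace])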

3-90

TCP Slow Start

When connection begins, CongWin = 1 MSS.
  Example: MSS = 500 bytes & RTT = 200 msec
  initial rate = MSS/RTT = 20 kbps

Available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate.

When connection begins, increase rate exponentially fast until the first loss event.
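Checking the arithmetic of the slide's example:

    mss_bits = 500 * 8          # 500-byte MSS
    rtt = 0.200                 # 200 msec
    print(mss_bits / rtt)       # 20000.0 bits/sec = 20 kbps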

3-91

TCP Slow Start (more)

When connection begins, increase rate exponentially until the first loss event:
  double CongWin every RTT
  done by incrementing CongWin by 1 MSS for every ACK received

Summary: initial rate is slow, but ramps up exponentially fast.

[Figure: Host A sends one segment, then two segments, then four segments in successive RTTs to Host B.]
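A sketch (assuming a lossless start and one ACK per segment) of why per-ACK increments double the window each RTT:

    cong_win = 1        # in segments
    for rtt_round in range(4):
        acks_this_rtt = cong_win        # one ACK per segment sent in this RTT
        for _ in range(acks_this_rtt):
            cong_win += 1               # +1 segment per ACK
        print(f"after RTT {rtt_round + 1}: CongWin = {cong_win}")
    # prints 2, 4, 8, 16 -> doubling each RTT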

3-92

Refinement

After 3 dup ACKs:
  CongWin is cut in half
  window then grows linearly
  this is known as the "fast-recovery" phase, implemented in the newer TCP Reno

But after a timeout event:
  CongWin is instead set to 1 MSS
  window then grows exponentially up to a threshold, then grows linearly

Philosophy:
  3 dup ACKs indicate the network is capable of delivering some segments
  a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.

Implementation:
  variable Threshold
  at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

[Figure: congestion window size (segments) vs. transmission round (1-15), showing the threshold line and the different evolutions of TCP Tahoe and TCP Reno.]

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, sender is in slow-start phase; window grows exponentially.

When CongWin is above Threshold, sender is in congestion-avoidance phase; window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP-Vegas: instead of reacting to a loss, anticipate & prepare for a loss.
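Putting the four rules together, a compact sketch (window measured in MSS units; the initial threshold and event sequence are assumed, not from the slides):

    class RenoSketch:
        """Toy model of the summary rules above; window measured in MSS."""
        def __init__(self):
            self.cong_win = 1.0
            self.threshold = 64.0     # assumed initial threshold

        def on_ack(self):
            if self.cong_win < self.threshold:        # slow start
                self.cong_win += 1.0                  # +1 MSS per ACK -> doubles per RTT
            else:                                     # congestion avoidance
                self.cong_win += 1.0 / self.cong_win  # ~ +1 MSS per RTT

        def on_triple_dup_ack(self):
            self.threshold = self.cong_win / 2
            self.cong_win = self.threshold            # fast recovery (Reno)

        def on_timeout(self):
            self.threshold = self.cong_win / 2
            self.cong_win = 1.0                       # back to slow start

    s = RenoSketch()
    for _ in range(100):
        s.on_ack()
    s.on_triple_dup_ack()
    print(s.cong_win, s.threshold)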

3-95

TCP sender congestion control

Event: ACK receipt for previously unACKed data
State: Slow Start (SS)
TCP Sender Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unACKed data
State: Congestion Avoidance (CA)
TCP Sender Action: CongWin = CongWin + MSS * (MSS/CongWin)
Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
State: SS or CA
TCP Sender Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
State: SS or CA
TCP Sender Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: enter slow start

Event: duplicate ACK
State: SS or CA
TCP Sender Action: increment duplicate ACK count for the segment being ACKed
Commentary: CongWin and Threshold not changed

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when loss occurs.

When the window is W, throughput is W/RTT.

Just after a loss, the window drops to W/2, throughput to W/(2 RTT).

Average throughput: 0.75 W/RTT.
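Why 0.75: between losses the window ramps roughly linearly from W/2 back to W, so the time-average window is the midpoint, 0.75 W. A quick numerical check (W and RTT are assumed values):

    W, rtt = 80, 0.1                          # segments, seconds (assumed)
    # window ramps linearly from W/2 to W between loss events
    avg_window = (W/2 + W) / 2                # = 0.75 * W
    print(avg_window / rtt, 0.75 * W / rtt)   # both 600.0 segments/sec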

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput.

Requires window size W = 83,333 in-flight segments.

Throughput in terms of loss rate L:

    throughput = 1.22 * MSS / (RTT * sqrt(L))

=> L = 2 * 10^-10.  Wow!

New versions of TCP for high-speed networks are needed!
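Checking the slide's numbers against that formula:

    from math import sqrt

    mss_bits = 1500 * 8
    rtt = 0.1
    target = 10e9                                # 10 Gbps

    W = target * rtt / mss_bits                  # in-flight segments needed
    L = (1.22 * mss_bits / (rtt * target)) ** 2  # loss rate that sustains the target
    print(round(W), L)                           # ~83333 segments, ~2e-10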

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.

[Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R.]

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g. similar propagation delay, etc.):
  additive increase gives a slope of 1 as throughput increases
  multiplicative decrease decreases throughput proportionally

[Figure: Connection 2 throughput vs. Connection 1 throughput, each axis bounded by R; the trajectory alternates between congestion avoidance (additive increase) and loss (decrease window by a factor of 2), converging toward the equal-bandwidth-share line.]
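A toy simulation of that convergence (assumes synchronized losses whenever the link is full; capacity and starting rates are arbitrary):

    R = 100.0                      # bottleneck capacity (arbitrary units)
    x1, x2 = 10.0, 70.0            # unequal starting throughputs
    for step in range(200):
        if x1 + x2 >= R:                     # link full: both flows see a loss
            x1, x2 = x1 / 2, x2 / 2          # multiplicative decrease
        else:
            x1, x2 = x1 + 1, x2 + 1          # additive increase (same slope for both)
    print(round(x1, 1), round(x2, 1))        # nearly equal shares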

3-100

Fairness (more)

Fairness and UDP:
  multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
  instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
  research area: TCP-friendly congestion control

Fairness and parallel TCP connections:
  nothing prevents an app from opening parallel connections between 2 hosts
  Web browsers do this
  Example: link of rate R supporting 9 existing connections:
    a new app that asks for 1 TCP connection gets rate R/10
    a new app that asks for 11 TCP connections gets around R/2

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by:
  TCP connection establishment
  data transmission delay
  slow start (the sender's congestion window)

Notation, assumptions:
  assume one link between client and server, of rate R
  S: MSS (bits)
  O: object size (bits)
  no retransmissions (no loss, no corruption)
  protocol overhead is negligible

Window size:
  first assume a fixed congestion window of W segments
  then dynamic window modeling: slow start

3-102

Fixed congestion window (1)

Let W denote a 'fixed' congestion window size (a positive integer); assume no other congestion window constraint.

First case: WS/R > RTT + S/R
  the ACK for the first segment in the window returns before a window's worth of data has been sent

delay = 2 RTT + O/R

(The object request is piggybacked on the connection setup; this is the "lower bound" of latency.)

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R
  sender waits for an ACK after sending each window's worth of data

delay = 2 RTT + O/R + (K-1)[S/R + RTT - WS/R]

where K is the number of windows of data that cover the object: if O is the size of the object, K = O/(WS), rounded up to an integer.
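A small helper that transcribes the two cases directly (the example object size, MSS, link rate, RTT and W are assumed):

    from math import ceil

    def static_window_delay(O, S, R, rtt, W):
        """Latency to fetch an O-bit object with a fixed window of W segments."""
        K = ceil(O / (W * S))                     # windows needed to cover the object
        stall = S / R + rtt - W * S / R           # idle time per window, if positive
        if stall <= 0:                            # case 1: WS/R >= RTT + S/R
            return 2 * rtt + O / R
        return 2 * rtt + O / R + (K - 1) * stall  # case 2

    # Example (assumed): 100 kB object, 536-byte MSS, 1 Mbps link, 100 ms RTT, W = 4
    print(round(static_window_delay(8e5, 536*8, 1e6, 0.1, 4), 3), "seconds")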

3-104

TCP Delay Modeling Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the delay for one object is:

    Latency = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where P is the number of times TCP idles at the server:

    P = min{Q, K-1}

  - where Q is the number of times the server would idle if the object were of infinite size
  - and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[Figure: timeline at client and server; initiate TCP connection, request object, then first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R, complete transmission, object delivered.]

Example:
  O/S = 15 segments
  K = 4 windows
  Q = 2
  P = min{K-1, Q} = 2
  server idles P = 2 times

Delay components:
  2 RTT for connection establishment and request
  O/R to transmit the object
  time the server idles due to slow start

Server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

For k = 1, 2, ..., K:

  time to transmit the kth window:
      2^(k-1) S/R

  time from when the server starts to send a segment until it receives the acknowledgement:
      S/R + RTT

  idle time after the kth window:
      [S/R + RTT - 2^(k-1) S/R]^+

  delay = 2 RTT + O/R + sum over p = 1..P of idleTime_p
        = 2 RTT + O/R + sum over p = 1..P of [S/R + RTT - 2^(p-1) S/R]
        = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

[Figure: same client/server timeline as on the previous slide, with windows of duration S/R, 2S/R, 4S/R, 8S/R.]

3-107

TCP Delay Modeling (4)

How do we calculate K? Recall K = the number of windows that cover the object:

  K = min{k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O}
    = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
    = min{k : 2^k - 1 >= O/S}
    = min{k : k >= log2(O/S + 1)}
    = ceil( log2(O/S + 1) )

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
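A sketch that puts the model together (the expression used for Q here is my own derivation from the idle-time condition, so treat it as an assumption; example numbers follow Experiment A below):

    from math import ceil, log2

    def slow_start_latency(O, S, R, rtt):
        """Latency from the slow-start model above (no loss, negligible overhead)."""
        K = ceil(log2(O / S + 1))             # windows that cover the object
        Q = ceil(log2(1 + rtt * R / S))       # idles for an infinite-size object (assumed form)
        P = min(K - 1, Q)                     # times the server actually idles
        return 2 * rtt + O / R + P * (rtt + S / R) - (2**P - 1) * S / R

    # S = 536 bytes, RTT = 100 msec, O = 100 kbytes, R = 28 kbps
    print(round(slow_start_latency(O=8e5, S=536*8, R=28e3, rtt=0.1), 2), "sec")
    # ~28.9 sec, cf. the 28 kbps row of Experiment A below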

3-108

Experiment A
S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

R          O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    28.6 sec   1   28.8 sec                    28.9 sec
100 kbps   8.0 sec    2   8.2 sec                     8.4 sec
1 Mbps     800 msec   5   1.0 sec                     1.5 sec
10 Mbps    80 msec    7   0.28 sec                    0.98 sec

Observation: slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B
S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

R          O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    1.43 sec   1   1.63 sec                    1.73 sec
100 kbps   0.4 sec    2   0.6 sec                     0.76 sec
1 Mbps     40 msec    3   0.24 sec                    0.52 sec
10 Mbps    4 msec     3   0.20 sec                    0.50 sec

Observation: slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C
S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

R          O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    1.43 sec   3   3.4 sec                     5.8 sec
100 kbps   0.4 sec    3   2.4 sec                     5.2 sec
1 Mbps     40 msec    3   2.0 sec                     5.0 sec
10 Mbps    4 msec     3   2.0 sec                     5.0 sec

Observation: slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
  1 base HTML page (of size O bits)
  M images (each of size O bits)

Non-persistent HTTP:
  M+1 TCP connections in series
  Response time = (M+1) O/R + (M+1) 2 RTT + sum of idle times

Persistent HTTP:
  2 RTT to request and receive the base HTML file
  1 RTT to request and receive the M images
  Response time = (M+1) O/R + 3 RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
  suppose M/X is an integer
  1 TCP connection for the base file
  M/X sets of X parallel connections for the images
  Response time = (M+1) O/R + (M/X + 1) 2 RTT + sum of idle times
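Ignoring the slow-start idle times, the three formulas with the next chart's parameters (RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5; the link rate R = 1 Mbps is an assumption for the example):

    O = 5 * 8000          # 5 Kbytes in bits
    M, X = 10, 5
    rtt, R = 0.1, 1e6     # 100 ms RTT, 1 Mbps link (assumed)

    non_persistent = (M + 1) * O / R + (M + 1) * 2 * rtt
    persistent     = (M + 1) * O / R + 3 * rtt
    parallel       = (M + 1) * O / R + (M / X + 1) * 2 * rtt
    print(non_persistent, persistent, parallel)   # 2.64, 0.74, 1.04 seconds (idle times excluded)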

3-112

[Figure: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.]

For a low-bandwidth connection, the response time is dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

[Figure: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.]

For larger RTT, the response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary

Principles behind transport-layer services:
  multiplexing, demultiplexing
  reliable data transfer
  flow control
  congestion control

Instantiation and implementation in the Internet: UDP, TCP.

Next: leaving the network "edge" (application, transport layers) and heading into the network "core".


3-54

TCP seq rsquos and ACKsSeq rsquos

byte stream ldquonumberrdquo of first byte in segmentrsquos data

ACKs seq of next byte

expected from other side

cumulative ACKQ how receiver

handles out-of-order segments A TCP spec

doesnrsquot say - up to implementor

Host A Host B

Seq=42 ACK=79 data = lsquoCrsquo

Seq=79 ACK=43 data = lsquoCrsquo

Seq=43 ACK=80

Usertypes

lsquoCrsquo

host ACKsreceipt

of echoedlsquoCrsquo

host ACKsreceipt of

lsquoCrsquo echoesback lsquoCrsquo

timesimple telnet scenario

Negotiate during 3-way handshake

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 55: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-55

TCP Round Trip Time and TimeoutQ how to set TCP

timeout value longer than RTT

but RTT varies too short premature

timeout unnecessary

retransmissions too long slow reaction

to segment loss poor performance

Q how to estimate RTT SampleRTT measured time

from segment transmission until ACK receipt ignore retransmissions

SampleRTT will vary want estimated RTT ldquosmootherrdquo average several recent

measurements not just current SampleRTT

tx

retx

ack

Estimated RTT

tx

retx

ack

Estimated RTT

Too long

Too short

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission: LastByteSent - LastByteAcked <= CongWin

Roughly: rate = CongWin/RTT bytes/sec

CongWin is a dynamic function of perceived network congestion

How does the sender perceive congestion?

loss event = timeout or 3 duplicate ACKs

TCP sender reduces rate (CongWin) after a loss event

three mechanisms: AIMD, slow start, conservative behavior after timeout events

Note: CC must be efficient to make use of the available BW

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

[Figure: long-lived TCP connection; the congestion window traces a sawtooth over time, oscillating roughly between 8, 16 and 24 Kbytes]

Role of ACK:
1. An indication of loss
2. ACKs are self-clocking: if the delay is large, the congestion window grows more slowly

Human analogy: HK government or CUHK
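A minimal sketch (not from the slides) of the AIMD rule in Python, with window sizes in MSS units and a per-RTT boolean loss_event flag as an assumed input:

    # Hedged sketch of AIMD: add 1 MSS per RTT, halve on a loss event.
    def aimd_update(cong_win_mss, loss_event):
        if loss_event:
            return max(cong_win_mss / 2.0, 1.0)   # multiplicative decrease
        return cong_win_mss + 1.0                 # additive increase (probing)

    # Example: produces a sawtooth like the trace sketched above
    w = 10.0
    for rtt in range(6):
        w = aimd_update(w, loss_event=(rtt == 3))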

3-90

TCP Slow Start

When connection begins, CongWin = 1 MSS

Example: MSS = 500 bytes & RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be >> MSS/RTT

desirable to quickly ramp up to a respectable rate

When connection begins, increase rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)

When connection begins, increase rate exponentially until the first loss event:

double CongWin every RTT

done by incrementing CongWin by 1 MSS for every ACK received

Summary: initial rate is slow but ramps up exponentially fast (see the sketch below)

[Figure: timing diagram between Host A and Host B; one segment in the first RTT, two segments in the second, four segments in the third]
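A short sketch (an illustration, not from the slides) of why incrementing CongWin by 1 MSS per ACK doubles the window each RTT:

    # Each RTT delivers CongWin ACKs (one per segment, assuming no loss),
    # so +1 MSS per ACK doubles CongWin every RTT.
    cong_win = 1            # in MSS
    for rtt in range(4):
        acks_this_rtt = cong_win
        for _ in range(acks_this_rtt):
            cong_win += 1   # +1 MSS per ACK
        print("after RTT", rtt + 1, "CongWin =", cong_win, "MSS")
    # prints 2, 4, 8, 16 MSS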

3-92

Refinement

After 3 dup ACKs:

CongWin is cut in half; the window then grows linearly

This is known as the "fast recovery" phase, implemented in the newer TCP Reno

But after a timeout event:

CongWin is instead set to 1 MSS; the window then grows exponentially to a threshold, then grows linearly

Philosophy:

• 3 dup ACKs indicate the network is capable of delivering some segments
• a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?

A: When CongWin gets to 1/2 of its value before the timeout.

Implementation:

Variable: Threshold

At a loss event, Threshold is set to 1/2 of CongWin just before the loss event

[Figure: congestion window size (segments) versus transmission round (1-15), showing the threshold line and the different trajectories of TCP Tahoe and TCP Reno after a loss]

3-94

Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.

When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP-Vegas: instead of reacting to a loss, anticipate & prepare for a loss
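A compact sketch of the rules above (Reno-style, windows in MSS units); the class and its names are illustrative, not a real TCP implementation:

    class RenoSketch:
        """Illustrative model of the summary rules above; not a real TCP stack."""
        def __init__(self):
            self.cong_win = 1.0      # MSS
            self.threshold = 64.0    # MSS (arbitrary initial value, an assumption)

        def on_new_ack(self):
            if self.cong_win < self.threshold:          # slow start
                self.cong_win += 1.0                    # per ACK -> doubles every RTT
            else:                                       # congestion avoidance
                self.cong_win += 1.0 / self.cong_win    # ~ +1 MSS per RTT

        def on_triple_dup_ack(self):
            self.threshold = max(self.cong_win / 2.0, 1.0)
            self.cong_win = self.threshold              # fast recovery (Reno)

        def on_timeout(self):
            self.threshold = max(self.cong_win / 2.0, 1.0)
            self.cong_win = 1.0                         # back to slow start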

3-95

TCP sender congestion control

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
TCP sender action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
TCP sender action: CongWin = CongWin + MSS * (MSS/CongWin)
Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
State: SS or CA
TCP sender action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
State: SS or CA
TCP sender action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: enter slow start

Event: duplicate ACK
State: SS or CA
TCP sender action: increment duplicate ACK count for the segment being ACKed
Commentary: CongWin and Threshold not changed

3-96

TCP throughput

What is the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when loss occurs.

When the window is W, throughput is W/RTT

Just after a loss, the window drops to W/2, and throughput to W/(2 RTT)

Average throughput: 0.75 W/RTT
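A one-line check of the 0.75 factor (a sketch under the slide's assumptions that the window ramps linearly from W/2 back to W between losses):

    # Average of W/(2*RTT) and W/RTT over the sawtooth is 0.75 * W / RTT.
    def avg_tcp_throughput_bytes_per_sec(W_bytes, rtt_sec):
        return 0.75 * W_bytes / rtt_sec

    print(avg_tcp_throughput_bytes_per_sec(64_000, 0.1))  # 480000.0 bytes/sec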

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput

Requires window size W = 83,333 in-flight segments

Throughput in terms of loss rate:

throughput = 1.22 * MSS / (RTT * sqrt(L))

This requires L = 2 * 10^-10. Wow! New versions of TCP for high-speed networks are needed!
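A quick sketch solving the formula above for L with the slide's numbers (1500-byte segments, 100 ms RTT, 10 Gbps target):

    # throughput = 1.22 * MSS / (RTT * sqrt(L))  =>  L = (1.22 * MSS / (RTT * throughput))**2
    mss_bits = 1500 * 8
    rtt = 0.100                     # seconds
    target = 10e9                   # 10 Gbps
    L = (1.22 * mss_bits / (rtt * target)) ** 2
    print(L)                        # ~2e-10: at most about one loss per 5,000,000,000 segments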

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]

3-99

Why is TCP fair?

Two competing homogeneous sessions (e.g., similar propagation delay, etc.):

additive increase gives a slope of 1 as throughput increases

multiplicative decrease decreases throughput proportionally

[Figure: Connection 1 throughput vs Connection 2 throughput, both bounded by R; the trajectory alternates between congestion-avoidance additive increase (slope 1) and loss events that decrease the window by a factor of 2, converging toward the equal bandwidth share line]
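A toy simulation (my sketch, not from the slides) of the convergence argument above, assuming both flows add one unit per RTT and see a synchronized loss whenever their combined rate exceeds the link capacity R:

    R = 100.0
    x1, x2 = 80.0, 10.0             # unequal starting throughputs
    for _ in range(200):
        x1 += 1.0                   # additive increase for both flows
        x2 += 1.0
        if x1 + x2 > R:             # congestion: both halve their windows
            x1 /= 2.0
            x2 /= 2.0
    print(round(x1, 1), round(x2, 1))   # roughly equal shares: the gap halves at every loss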

3-100

Fairness (more)

Fairness and UDP:

multimedia apps often do not use TCP: they do not want their rate throttled by congestion control

instead they use UDP: pump audio/video at a constant rate, tolerate packet loss

research area: TCP-friendly congestion control

Fairness and parallel TCP connections:

nothing prevents an app from opening parallel connections between 2 hosts

Web browsers do this

Example: link of rate R supporting 9 existing connections

new app asks for 1 TCP connection, gets rate R/10

new app asks for 11 TCP connections, gets around R/2
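The R/10 versus roughly R/2 numbers follow from equal per-connection sharing of the bottleneck; a quick check:

    # Assume the bottleneck bandwidth is split equally per TCP connection.
    R = 1.0
    existing = 9
    print(R * 1 / (existing + 1))        # one new connection:    R/10
    print(R * 11 / (existing + 11))      # eleven new connections: 11R/20, about R/2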

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by:

TCP connection establishment

data transmission delay

slow start

sender's congestion window

Notation, assumptions:

assume one link between client and server, of rate R

S: MSS (bits)

O: object size (bits)

no retransmissions (no loss, no corruption)

protocol overhead is negligible

Window size:

first assume a fixed congestion window of W segments

then dynamic windows, modeling slow start

3-102

Fixed congestion window (1)

Assume no congestion window constraint. Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R

the ACK for the first segment in the window returns before a window's worth of data has been sent

delay = 2RTT + O/R

The object request is piggybacked.

This is the "lower bound" of latency.

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R

wait for the ACK after sending a window's worth of data

delay = 2RTT + O/R + (K-1)[S/R + RTT - WS/R]

where K is the number of windows of data that cover the object. If O is the "size" of the object, K = O/(WS) (rounded up to an integer).
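A sketch of the two fixed-window cases above as one function (my transcription of the formulas, not slide material):

    import math

    def fixed_window_delay(O_bits, R_bps, S_bits, rtt, W):
        """Latency for a fixed congestion window of W segments, no loss."""
        K = math.ceil(O_bits / (W * S_bits))        # windows needed to cover the object
        if W * S_bits / R_bps > rtt + S_bits / R_bps:
            return 2 * rtt + O_bits / R_bps         # first case: sender never idles
        stall = S_bits / R_bps + rtt - W * S_bits / R_bps
        return 2 * rtt + O_bits / R_bps + (K - 1) * stall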

3-104

TCP Delay Modeling: Slow Start (1)

Now suppose the window grows according to slow start.

Will show that the delay for one object is:

Latency = 2RTT + O/R + P[RTT + S/R] - (2^P - 1)S/R

where P is the number of times TCP idles at the server:

P = min{Q, K-1}

- where Q is the number of times the server would idle if the object were of infinite size

- and K is the number of windows that cover the object
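A direct transcription of the latency formula above (a sketch; P = min{Q, K-1} must be supplied by the caller, with K and Q obtained as on the following slides):

    def slow_start_latency(O_bits, R_bps, S_bits, rtt, P):
        # Latency = 2RTT + O/R + P[RTT + S/R] - (2^P - 1) S/R, as stated above.
        s_over_r = S_bits / R_bps
        return 2 * rtt + O_bits / R_bps + P * (rtt + s_over_r) - (2 ** P - 1) * s_over_r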

3-105

TCP Delay Modeling: Slow Start (2)

[Figure: timing diagram; the client initiates the TCP connection and requests the object; the server sends a first window of S/R, a second window of 2S/R, a third window of 4S/R, and a fourth window of 8S/R; the object is delivered when transmission completes]

Example:
• O/S = 15 segments
• K = 4 windows
• Q = 2
• P = min{K-1, Q} = 2

Server idles P = 2 times.

Delay components:
• 2 RTT for connection establishment and request
• O/R to transmit the object
• time the server idles due to slow start

Server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

time to transmit the kth window = 2^(k-1) S/R

time from when the server starts to send a segment until the server receives its acknowledgement = S/R + RTT

idle time after the kth window = [S/R + RTT - 2^(k-1) S/R]+

where k = 1, 2, ..., K

delay = 2RTT + O/R + sum over p = 1..P of idleTime_p

      = 2RTT + O/R + sum over k = 1..P of [S/R + RTT - 2^(k-1) S/R]

      = 2RTT + O/R + P[RTT + S/R] - (2^P - 1) S/R

[Figure: the same slow-start timing diagram as on the previous slide, with first, second, third and fourth windows of size S/R, 2S/R, 4S/R and 8S/R]

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object.

How do we calculate K?

K = min{k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O}

  = min{k : (2^k - 1) S >= O}

  = min{k : 2^k >= O/S + 1}

  = min{k : k >= log2(O/S + 1)}

  = ceil(log2(O/S + 1))

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
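A sketch of the K derivation above, plus a helper for Q; the closed form for Q is my assumption (the slide leaves Q as a homework exercise):

    import math

    def windows_covering_object(O_bits, S_bits):
        """K = ceil(log2(O/S + 1)), per the derivation above."""
        return math.ceil(math.log2(O_bits / S_bits + 1))

    def idles_for_infinite_object(R_bps, S_bits, rtt):
        # Q: largest k with S/R + RTT - 2**(k-1) * S/R >= 0 (assumed closed form).
        return math.floor(math.log2(1 + rtt * R_bps / S_bits)) + 1

    print(windows_covering_object(15 * 536 * 8, 536 * 8))   # O/S = 15 segments -> K = 4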

3-108

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 Kbytes (relatively large)

R        | O/R      | P | Min latency: O/R + 2 RTT | Latency with slow start
28 kbps  | 28.6 sec | 1 | 28.8 sec                 | 28.9 sec
100 kbps | 8.0 sec  | 2 | 8.2 sec                  | 8.4 sec
1 Mbps   | 800 msec | 5 | 1.0 sec                  | 1.5 sec
10 Mbps  | 80 msec  | 7 | 0.28 sec                 | 0.98 sec

Observation: slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 Kbytes (relatively small)

R        | O/R      | P | Min latency: O/R + 2 RTT | Latency with slow start
28 kbps  | 1.43 sec | 1 | 1.63 sec                 | 1.73 sec
100 kbps | 0.4 sec  | 2 | 0.6 sec                  | 0.76 sec
1 Mbps   | 40 msec  | 3 | 0.24 sec                 | 0.52 sec
10 Mbps  | 4 msec   | 3 | 0.20 sec                 | 0.50 sec

Observation: slow start adds appreciable delay when R is high, even for a relatively small object.

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 Kbytes (relatively small)

R        | O/R      | P | Min latency: O/R + 2 RTT | Latency with slow start
28 kbps  | 1.43 sec | 3 | 3.4 sec                  | 5.8 sec
100 kbps | 0.4 sec  | 3 | 2.4 sec                  | 5.2 sec
1 Mbps   | 40 msec  | 3 | 2.0 sec                  | 5.0 sec
10 Mbps  | 4 msec   | 3 | 2.0 sec                  | 5.0 sec

Observation: slow start can significantly increase latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:

1 base HTML page (of size O bits)

M images (each of size O bits)

Non-persistent HTTP:

M+1 TCP connections in series

Response time = (M+1)O/R + (M+1)2RTT + sum of idle times

Persistent HTTP:

2 RTT to request and receive the base HTML file

1 RTT to request and receive the M images

Response time = (M+1)O/R + 3RTT + sum of idle times

Non-persistent HTTP with X parallel connections:

suppose M/X is an integer

1 TCP connection for the base file

M/X sets of parallel connections for the images

Response time = (M+1)O/R + (M/X + 1)2RTT + sum of idle times
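A sketch of the three response-time expressions above; for simplicity the slow-start idle terms are collected in a single idle_sum argument (an assumption, zero by default):

    def http_response_times(O_bits, R_bps, rtt, M, X, idle_sum=0.0):
        """Response-time expressions above; idle_sum stands in for the idle-time terms."""
        non_persistent = (M + 1) * O_bits / R_bps + (M + 1) * 2 * rtt + idle_sum
        persistent     = (M + 1) * O_bits / R_bps + 3 * rtt + idle_sum
        parallel       = (M + 1) * O_bits / R_bps + (M / X + 1) * 2 * rtt + idle_sum
        return non_persistent, persistent, parallel

    # Example matching the chart below: O = 5 Kbytes, M = 10, X = 5, RTT = 100 ms, R = 100 kbps
    print(http_response_times(5 * 8000, 100_000, 0.1, 10, 5))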

3-112

HTTP Response time (in seconds), RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5

[Figure: bar chart comparing non-persistent, persistent, and parallel non-persistent HTTP response times at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps]

For a low-bandwidth connection, response time is dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

HTTP Response time (in seconds), RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5

[Figure: bar chart comparing non-persistent, persistent, and parallel non-persistent HTTP response times at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps]

For a larger RTT, response time is dominated by TCP connection establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services:

multiplexing, demultiplexing

reliable data transfer

flow control

congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application, transport layers), into the network "core"

3-56

TCP Round Trip Time and TimeoutEstimatedRTT = (1- )EstimatedRTT + SampleRTT

Exponential weighted moving average influence of past sample decreases

exponentially fast typical value = 0125

ERTT(0) = 0

ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)

ERTT(2) =(1- ) SRTT(1) + SRTT(2)

ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 57: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-57

Example RTT estimationRTT gaiacsumassedu to fantasiaeurecomfr

100

150

200

250

300

350

1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106

time (seconnds)

RTT

(mill

isec

onds

)

SampleRTT Estimated RTT

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)
Fairness and UDP:
  multimedia apps often do not use TCP; they do not want their rate throttled by congestion control
  instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
  research area: TCP-friendly congestion control
Fairness and parallel TCP connections:
  nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this
  Example: a link of rate R supporting 9 connections
    a new app asking for 1 TCP connection gets rate R/10
    a new app asking for 11 TCP connections gets around R/2

3-101

Delay modeling
Q: How long does it take to receive an object from a Web server after sending a request?
Delay is influenced by: TCP connection establishment, data transmission delay, slow start, the sender's congestion window.
Notation, assumptions:
  assume one link between client and server, of rate R
  S: MSS (bits)
  O: object size (bits)
  no retransmissions (no loss, no corruption)
  protocol overhead is negligible
Window size:
  first assume a fixed congestion window of W segments
  then dynamic window modeling: slow start

3-102

Fixed congestion window (1) (assume no congestion window constraint)
Let W denote a 'fixed' congestion window size (a positive integer).
First case: W·S/R > RTT + S/R, i.e. the ACK for the first segment in the window returns before a window's worth of data has been sent.
  delay = 2·RTT + O/R
The object request is piggybacked. This is the "lower bound" of latency.

3-103

Fixed (or static) congestion window (2)
Second case: W·S/R < RTT + S/R, i.e. the sender must wait for an ACK after sending a window's worth of data.
  delay = 2·RTT + O/R + (K-1)·[S/R + RTT - W·S/R]
where K is the number of windows of data that cover the object: if O is the size of the object, K = O/(W·S), rounded up to an integer.
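Both cases can be folded into one small helper (a sketch under the slide's assumptions: one link, no loss, negligible protocol overhead; units must be consistent, e.g. bits and seconds):

# Static-window latency from the two cases above.
import math

def fixed_window_delay(O, S, R, W, RTT):
    K = math.ceil(O / (W * S))                   # windows needed to cover the object
    stall = S / R + RTT - W * S / R              # per-window idle time, if positive
    if stall <= 0:                               # case 1: ACK returns in time
        return 2 * RTT + O / R
    return 2 * RTT + O / R + (K - 1) * stall     # case 2: sender stalls K-1 times

# e.g. fixed_window_delay(O=100_000 * 8, S=536 * 8, R=1e6, W=4, RTT=0.1)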

3-104

TCP Delay Modeling: Slow Start (1)
Now suppose the window grows according to slow start.
Will show that the delay for one object is:

  Latency = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

where P is the number of times TCP idles at the server:

  P = min{Q, K-1}

- where Q is the number of times the server would idle if the object were of infinite size
- and K is the number of windows that cover the object

3-105

TCP Delay Modeling: Slow Start (2)
[Figure: client/server timing diagram; after the RTT to initiate the TCP connection and the request for the object, the server sends a first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R, then transmission completes and the object is delivered]
Example:
• O/S = 15 segments
• K = 4 windows
• Q = 2
• P = min{K-1, Q} = 2
Server idles P = 2 times.
Delay components:
• 2·RTT for connection establishment and request
• O/R to transmit the object
• the time the server idles due to slow start (P = min{K-1, Q} times)

3-106

TCP Delay Modeling (3)
For the k-th window, k = 1, 2, …, K:
  time to transmit the k-th window = 2^(k-1)·S/R
  time from when the server starts to send a segment until it receives its acknowledgement = S/R + RTT
  idle time after the k-th window = [S/R + RTT - 2^(k-1)·S/R]^+

  delay = 2·RTT + O/R + Σ_{k=1..P} idleTime_k
        = 2·RTT + O/R + Σ_{k=1..P} [S/R + RTT - 2^(k-1)·S/R]
        = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

[Figure: the same slow-start timing diagram as on the previous slide]

3-107

TCP Delay Modeling (4)
How do we calculate K? Recall K = the number of windows that cover the object.

  K = min{k : 2^0·S + 2^1·S + … + 2^(k-1)·S ≥ O}
    = min{k : 2^0 + 2^1 + … + 2^(k-1) ≥ O/S}
    = min{k : 2^k - 1 ≥ O/S}
    = min{k : k ≥ log2(O/S + 1)}
    = ⌈log2(O/S + 1)⌉

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
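Putting the pieces together (a sketch; the closed form for Q is left as homework on the slide, so the expression used below is an assumption derived from the same idle-time condition):

# Slow-start latency for one object: direct sum of idle times vs. the closed
# form from slide 3-105.  Units: O and S in bits, R in bits/sec, RTT in sec.
import math

def slow_start_latency(O, S, R, RTT):
    K = math.ceil(math.log2(O / S + 1))            # windows covering the object
    Q = math.ceil(math.log2(1 + RTT * R / S))      # idles for an infinite object (assumed form)
    P = min(Q, K - 1)
    idle = sum(max(0.0, S / R + RTT - 2 ** (k - 1) * S / R) for k in range(1, P + 1))
    direct = 2 * RTT + O / R + idle
    closed = 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R
    return direct, closed, P, K

# With S = 536 bytes, RTT = 100 msec, O = 100 kbytes and R = 1 Mbps this gives
# P = 5, matching the 1 Mbps row of Experiment A below.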

3-108

Experiment A: S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

  R         O/R        P   Min latency (O/R + 2·RTT)   Latency with slow start
  28 kbps   28.6 sec   1   28.8 sec                    28.9 sec
  100 kbps  8.0 sec    2   8.2 sec                     8.4 sec
  1 Mbps    800 msec   5   1.0 sec                     1.5 sec
  10 Mbps   80 msec    7   0.28 sec                    0.98 sec

Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B: S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

  R         O/R        P   Min latency (O/R + 2·RTT)   Latency with slow start
  28 kbps   1.43 sec   1   1.63 sec                    1.73 sec
  100 kbps  0.4 sec    2   0.6 sec                     0.76 sec
  1 Mbps    40 msec    3   0.24 sec                    0.52 sec
  10 Mbps   4 msec     3   0.20 sec                    0.50 sec

Slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C: S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

  R         O/R        P   Min latency (O/R + 2·RTT)   Latency with slow start
  28 kbps   1.43 sec   3   3.4 sec                     5.8 sec
  100 kbps  0.4 sec    3   2.4 sec                     5.2 sec
  1 Mbps    40 msec    3   2.0 sec                     5.0 sec
  10 Mbps   4 msec     3   2.0 sec                     5.0 sec

Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling
Assume a Web page consists of:
  1 base HTML page (of size O bits)
  M images (each of size O bits)
Non-persistent HTTP:
  M+1 TCP connections in series
  Response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times
Persistent HTTP:
  2 RTT to request and receive the base HTML file
  1 RTT to request and receive the M images
  Response time = (M+1)·O/R + 3·RTT + sum of idle times
Non-persistent HTTP with X parallel connections:
  suppose M/X is an integer
  1 TCP connection for the base file
  M/X sets of parallel connections for the images
  Response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times
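The three formulas above, ignoring the "sum of idle times" term, can be tabulated directly (a sketch for the scenario plotted next; the parameter values are the ones stated in the chart captions):

# Response-time formulas (idle times ignored) for RTT = 100 msec, O = 5 Kbytes,
# M = 10 images, X = 5 parallel connections.
def http_response_times(O_bits, R_bps, RTT, M, X):
    transmit = (M + 1) * O_bits / R_bps
    return {
        "non-persistent":          transmit + (M + 1) * 2 * RTT,
        "persistent":              transmit + 3 * RTT,
        "parallel non-persistent": transmit + (M / X + 1) * 2 * RTT,
    }

for R in (28e3, 100e3, 1e6, 10e6):
    print(R, http_response_times(O_bits=5_000 * 8, R_bps=R, RTT=0.1, M=10, X=5))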

3-112

[Figure: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5]

For a low-bandwidth connection, response time is dominated by transmission time.
Persistent connections give only a minor improvement over parallel connections.

3-113

[Figure: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5]

For a larger RTT, response time is dominated by TCP connection establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay·bandwidth networks.

3-114

Chapter 3 Summary
Principles behind transport layer services:
  multiplexing, demultiplexing
  reliable data transfer
  flow control
  congestion control
Instantiation and implementation in the Internet: UDP, TCP
Next: leaving the network "edge" (application, transport layers) and heading into the network "core"

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 58: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-58

TCP Round Trip Time and TimeoutSetting the timeout (by JacobsonKarel) EstimtedRTT plus ldquosafety marginrdquo

large variation in EstimatedRTT -gt larger safety margin first estimate of how much SampleRTT deviates from

EstimatedRTT

TimeoutInterval = EstimatedRTT + 4DevRTT

DevRTT = (1-)DevRTT + |SampleRTT-EstimatedRTT|

(typically = 025)

Then set timeout interval

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 59: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-59

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-60

TCP reliable data transfer

TCP creates rdt service on top of IPrsquos unreliable service

Pipelined segments (for performance)

Cumulative acks TCP uses single

retransmission timer

Retransmissions are triggered by timeout events duplicate ack ( for

performance reason)

Initially consider simplified TCP sender ignore duplicate

acks ignore flow control

congestion control

3-61

TCP sender eventsdata rcvd from app Create segment with

seq seq is byte-stream

number of first data byte in segment

start timer if not already running (think of timer as for oldest unacked segment)

expiration interval TimeOutInterval

timeout retransmit segment

that caused timeout restart timer Ack rcvd If acknowledges

previously unacked segments update what is

known to be acked start timer if there

are outstanding segments

3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo




3-62

TCP sender(simplified)

NextSeqNum = InitialSeqNum SendBase = InitialSeqNum

loop (forever) switch(event)

event data received from application above create TCP segment with sequence number NextSeqNum if (timer currently not running) start timer pass segment to IP NextSeqNum = NextSeqNum + length(data)

event timer timeout retransmit not-yet-acknowledged segment with smallest sequence number start timer

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer

end of loop forever

Commentbull SendBase-1 last cumulatively ackrsquoed byteExamplebull SendBase-1 = 71y= 73 so the rcvrwants 73+ y gt SendBase sothat new data is acked

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP (segment structure, reliable data transfer, flow control, connection management)

3.6 Principles of congestion control

3.7 TCP congestion control

3-72

TCP Connection Management

Recall: TCP sender and receiver establish a "connection" before exchanging data segments.

initialize TCP variables: seq. #s, buffers, flow control info (e.g. RcvWindow)

client: connection initiator
    Socket clientSocket = new Socket(hostname, portNumber);

server: contacted by client
    Socket connectionSocket = welcomeSocket.accept();

Three-way handshake:

Step 1: client host sends TCP SYN segment to server; specifies initial seq #; no data.

Step 2: server host receives SYN, replies with SYN-ACK segment; server allocates buffers; specifies server initial seq #.

Step 3: client receives SYN-ACK, replies with ACK segment, which may contain data.

IMPORTANT: SYN FLOODING.

3-73

TCP three-way handshake

[Message sequence:
client -> server: SYN=1, seq=client_isn (connection request)
server -> client: SYN=1, seq=server_isn, ack=client_isn+1 (connection granted)
client -> server: SYN=0, seq=client_isn+1, ack=server_isn+1 (ACK, may carry data)]
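A minimal, runnable sketch around the two socket calls named on the previous slide; the port number is a placeholder, and the kernel performs the three-way handshake inside the Socket constructor (client side) and before accept() returns (server side).

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class HandshakeSketch {
    public static void main(String[] args) throws Exception {
        final ServerSocket welcomeSocket = new ServerSocket(6789);   // server: passive open

        Thread server = new Thread(() -> {
            try (Socket connectionSocket = welcomeSocket.accept()) { // SYN in, SYN-ACK out, ACK in
                System.out.println("server: connection from " + connectionSocket.getInetAddress());
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        server.start();

        // client: active open (sends SYN, receives SYN-ACK, replies with ACK)
        try (Socket clientSocket = new Socket("localhost", 6789)) {
            System.out.println("client: connected, local port " + clientSocket.getLocalPort());
        }                                                            // close() starts the FIN/ACK teardown

        server.join();
        welcomeSocket.close();
    }
}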

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket: clientSocket.close();

Step 1: client end system sends TCP FIN control segment to server.

Step 2: server receives FIN, replies with ACK; closes the connection, sends its own FIN.

[Message sequence: client sends FIN (close); server replies ACK, then sends FIN (close); client replies ACK, enters timed wait, and finally moves to closed.]

Q: why don't we combine the ACK and the FIN?
A: the side receiving the FIN may still have data in the pipeline to send before it is ready to close.

3-75

TCP Connection Management (cont)

Step 3: client receives FIN, replies with ACK. Enters "timed wait": it will respond with an ACK to any retransmitted FINs.

Step 4: server receives ACK. Connection closed.

Note: with a small modification, this can handle simultaneous FINs.

[Message sequence: client FIN (closing); server ACK; server FIN (closing); client ACK, timed wait, then closed; server closed.]

3-76

TCP Connection Management (cont)

[Figures: TCP client lifecycle and TCP server lifecycle state-transition diagrams.]

3-77

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP (segment structure, reliable data transfer, flow control, connection management)

3.6 Principles of congestion control

3.7 TCP congestion control

3-78

Principles of Congestion Control

Congestion, informally: "too many sources sending too much data too fast for the network to handle"

different from flow control!

manifestations:
lost packets (buffer overflow at routers)
long delays (queueing in router buffers)

another top-10 problem!

TCP provides one of the MANY WAYS to perform congestion control.

3-79

Causes/costs of congestion: scenario 1

two senders, two receivers (homogeneous)
one router, infinite buffers, link capacity C
no retransmission

[Figure: Host A and Host B each send lambda_in (original data, from the application) into a router with unlimited shared output link buffers; lambda_out is delivered to the receiving application.]

Costs: large delays when congested; throughput saturates at the maximum achievable value.

3-80

Causes/costs of congestion: scenario 2

one router, finite buffers
sender retransmits lost packets

[Figure: Host A sends lambda_in (original data) and lambda'_in (original data plus retransmitted data) into a router with finite shared output link buffers; lambda_out is delivered to Host B.]

Retransmissions happen at the transport layer because packets get dropped at the router.

3-81

Causes/costs of congestion: scenario 2 (continued)

always: lambda_in = lambda_out (goodput)
"perfect" retransmission, only when loss occurs: lambda'_in > lambda_out
retransmission of delayed (not lost) packets makes lambda'_in larger (than the perfect case) for the same lambda_out

"costs" of congestion:
more work (retransmissions) for a given "goodput"
unneeded retransmissions: the link carries multiple copies of a packet

3-82

Causes/costs of congestion: scenario 3

four senders, multihop paths, timeout/retransmit

Q: what happens as lambda_in and lambda'_in increase?

[Figure: Host A sends lambda_in (original data) and lambda'_in (original plus retransmitted data) across multiple routers with finite shared output link buffers; lambda_out is delivered to Host B.]

3-83

Causes/costs of congestion: scenario 3 (continued)

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted.

[Figure: as lambda'_in grows, lambda_out eventually collapses toward zero.]

The system collapses (e.g. students).

3-84

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
no explicit feedback from the network
congestion inferred from end-system observed loss and delay
approach taken by TCP

Network-assisted congestion control:
routers provide feedback to end systems
single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
explicit rate the sender should send at

3-85

Case study ATM ABR congestion control

ABR: available bit rate:
"elastic service"
if the sender's path is "underloaded": the sender should use the available bandwidth
if the sender's path is congested: the sender is throttled to the minimum guaranteed rate

RM (resource management) cells:
sent by the sender, interspersed with data cells
bits in the RM cell are set by switches ("network-assisted"):
    NI bit: no increase in rate (mild congestion)
    CI bit: congestion indication
RM cells are returned to the sender by the receiver, with the bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell; the sender's send rate is thus the minimum supportable rate on the path.

EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell.

3-87

Chapter 3 outline

3.1 Transport-layer services

3.2 Multiplexing and demultiplexing

3.3 Connectionless transport: UDP

3.4 Principles of reliable data transfer

3.5 Connection-oriented transport: TCP (segment structure, reliable data transfer, flow control, connection management)

3.6 Principles of congestion control

3.7 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission: LastByteSent - LastByteAcked <= CongWin

Roughly:  rate = CongWin / RTT  bytes/sec

CongWin is a dynamic function of perceived network congestion.

How does the sender perceive congestion?
loss event = timeout or 3 duplicate ACKs
TCP sender reduces rate (CongWin) after a loss event

three mechanisms: AIMD, slow start, conservative behavior after timeout events

Note: congestion control must also be efficient, making use of the available bandwidth.

3-89

TCP AIMD

multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

[Figure: congestion window of a long-lived TCP connection over time, oscillating in a sawtooth between roughly 8, 16, and 24 Kbytes.]

Role of ACKs:
1. an indication of loss (or its absence)
2. ACKs are self-clocking: if the delay is large, the rate of congestion-window growth is reduced

Human analogy: HK government or CUHK.

3-90

TCP Slow Start

When the connection begins, CongWin = 1 MSS.
Example: MSS = 500 bytes and RTT = 200 msec give an initial rate of 20 kbps (arithmetic spelled out below).

The available bandwidth may be >> MSS/RTT, so it is desirable to quickly ramp up to a respectable rate.

When the connection begins, increase the rate exponentially fast until the first loss event.
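Spelling out the initial-rate arithmetic from the example above:

$$\text{initial rate} \approx \frac{MSS}{RTT} = \frac{500 \times 8\ \text{bits}}{200\ \text{msec}} = 20\ \text{kbps}$$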

3-91

TCP Slow Start (more)

When the connection begins, increase the rate exponentially until the first loss event:
double CongWin every RTT
done by incrementing CongWin by 1 MSS for every ACK received

Summary: the initial rate is slow but ramps up exponentially fast.

[Timeline: Host A sends one segment; after one RTT the ACK arrives and two segments are sent; after another RTT, four segments; and so on.]

3-92

Refinement: after 3 dup ACKs

CongWin is cut in half; the window then grows linearly. This is known as the "fast recovery" phase, implemented in (new) TCP Reno.

But after a timeout event: CongWin is instead set to 1 MSS; the window then grows exponentially up to a threshold, then grows linearly.

Philosophy:
3 dup ACKs indicate the network is capable of delivering some segments
a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.

Implementation:
a variable Threshold
at a loss event, Threshold is set to 1/2 of the CongWin value just before the loss event

[Plot: congestion window size (segments) vs. transmission round (1-15), y-axis 0-14, showing the slow-start threshold and the TCP Tahoe and TCP Reno trajectories after a loss event.]

3-94

Summary: TCP congestion control

When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.
When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.
When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.
When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

READ THE BOOK on TCP Vegas: instead of reacting to a loss, it anticipates and prepares for a loss.

3-95

TCP sender congestion control

Event: ACK receipt for previously unACKed data. State: Slow Start (SS). Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance". Commentary: results in a doubling of CongWin every RTT.

Event: ACK receipt for previously unACKed data. State: Congestion Avoidance (CA). Action: CongWin = CongWin + MSS * (MSS/CongWin). Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT.

Event: loss event detected by triple duplicate ACK. State: SS or CA. Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance". Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS.

Event: timeout. State: SS or CA. Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start". Commentary: enter slow start.

Event: duplicate ACK. State: SS or CA. Action: increment the duplicate ACK count for the segment being ACKed. Commentary: CongWin and Threshold not changed.
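A minimal sketch (not the textbook's code) that steps the rules tabulated above, in units of MSS; the helper names (onNewAck, onTripleDupAck, onTimeout) and the loss injected at round 8 are illustrative assumptions, and fast recovery's dup-ACK window inflation is omitted.

public class CongWinSketch {
    double congWin = 1.0;      // congestion window, in MSS
    double threshold = 8.0;    // slow-start threshold, in MSS

    void onNewAck() {
        if (congWin < threshold) congWin += 1.0;           // slow start: +1 MSS per ACK (doubles per RTT)
        else congWin += 1.0 / congWin;                     // congestion avoidance: +MSS*(MSS/CongWin)
    }

    void onTripleDupAck() {                                // multiplicative decrease / fast recovery
        threshold = congWin / 2.0;
        congWin = Math.max(threshold, 1.0);                // CongWin will not drop below 1 MSS
    }

    void onTimeout() {                                     // back to slow start
        threshold = congWin / 2.0;
        congWin = 1.0;
    }

    public static void main(String[] args) {
        CongWinSketch tcp = new CongWinSketch();
        for (int round = 1; round <= 12; round++) {
            int acks = (int) tcp.congWin;                  // one ACK per segment in the window
            for (int i = 0; i < acks; i++) tcp.onNewAck();
            if (round == 8) tcp.onTripleDupAck();          // pretend a triple-dup-ACK loss at round 8
            System.out.printf("round %2d: CongWin = %.2f MSS, Threshold = %.2f MSS%n",
                              round, tcp.congWin, tcp.threshold);
        }
    }
}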

3-96

TCP throughput

What's the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)

Let W be the window size when a loss occurs.
When the window is W, the throughput is W/RTT.
Just after a loss, the window drops to W/2, and the throughput to W/(2 RTT).
Average throughput: 0.75 W/RTT (see the short derivation below).
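The 0.75 factor is just the average of a sawtooth that ramps linearly from W/2 up to W (a one-line check, not on the slide):

$$\overline{\text{throughput}} \approx \frac{\overline{W}}{RTT} = \frac{\tfrac{1}{2}\left(\tfrac{W}{2} + W\right)}{RTT} = \frac{3}{4}\cdot\frac{W}{RTT}$$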

3-97

TCP Futures

Example: 1500-byte segments, 100 ms RTT; want 10 Gbps throughput.
Requires window size W = 83,333 in-flight segments.

Throughput in terms of loss rate:

$$\text{throughput} = \frac{1.22 \cdot MSS}{RTT \cdot \sqrt{L}}$$

To sustain 10 Gbps this requires a loss rate of L = 2 x 10^-10. Wow! New versions of TCP for high-speed links are needed.
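A quick numeric check of the two numbers on this slide, assuming the loss-throughput relation just given; the class and variable names are illustrative.

public class HighSpeedTcpCheck {
    public static void main(String[] args) {
        double mssBits = 1500 * 8;          // 1500-byte segments
        double rtt = 0.100;                 // 100 ms
        double target = 10e9;               // 10 Gbps

        double windowSegments = target * rtt / mssBits;
        System.out.printf("required window W    = %.0f segments%n", windowSegments);  // ~83333

        double sqrtL = 1.22 * mssBits / (rtt * target);
        System.out.printf("required loss rate L = %.1e%n", sqrtL * sqrtL);            // ~2e-10
    }
}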

3-98

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.

[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R.]

3-99

Why is TCP fair

Two competing homogeneous sessions (e.g. similar propagation delay, etc.):
additive increase gives a slope of 1 as throughput increases
multiplicative decrease reduces throughput proportionally

[Phase plot: Connection 1 throughput (x-axis) vs. Connection 2 throughput (y-axis), both up to R, with the equal-bandwidth-share line; the trajectory alternates between congestion avoidance (additive increase) and loss (window decreased by a factor of 2), converging toward the equal-share line.]

3-100

Fairness (more)

Fairness and UDP:
multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
research area: TCP-friendly congestion control

Fairness and parallel TCP connections:
nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this
example: a link of rate R supporting 9 connections
    a new app asking for 1 TCP connection gets rate R/10
    a new app asking for 11 TCP connections gets around R/2

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

The delay is influenced by:
TCP connection establishment
data transmission delay
slow start (the sender's congestion window)

Notation, assumptions:
assume one link between client and server, of rate R
S: MSS (bits); O: object size (bits)
no retransmissions (no loss, no corruption)
protocol overhead is negligible

Window size:
first assume a fixed congestion window of W segments
then a dynamic window, modeling slow start

3-102

Fixed congestion window (1) (assume no dynamic congestion-window constraint)

Let W denote a 'fixed' congestion window size (a positive integer).

First case: WS/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data has been sent.

delay = 2 RTT + O/R

(The object request is piggybacked; this is the "lower bound" on the latency.)

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R: the sender waits for an ACK after sending each window's worth of data.

delay = 2 RTT + O/R + (K - 1) [S/R + RTT - WS/R]

where K is the number of windows of data that cover the object; if O is the size of the object, K = O/(WS), rounded up to an integer (a small sketch of both cases follows).
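A minimal sketch of the two static-window cases above, with example parameter values; the case is chosen by the sign of the per-window stall time S/R + RTT - WS/R.

public class StaticWindowDelay {
    static double delay(double O, double S, double R, double rtt, int W) {
        int K = (int) Math.ceil(O / (W * S));          // number of windows covering the object
        double stall = S / R + rtt - W * S / R;        // per-window idle time (case 2)
        if (stall <= 0) return 2 * rtt + O / R;        // case 1: WS/R >= RTT + S/R, no stalling
        return 2 * rtt + O / R + (K - 1) * stall;      // case 2: sender stalls K - 1 times
    }

    public static void main(String[] args) {
        double S = 536 * 8, O = 100_000 * 8, rtt = 0.1;  // S = 536 bytes, O = 100 kbytes, RTT = 100 ms
        System.out.printf("R = 1 Mbps, W = 4:  delay = %.2f s%n", delay(O, S, 1e6, rtt, 4));
        System.out.printf("R = 1 Mbps, W = 20: delay = %.2f s%n", delay(O, S, 1e6, rtt, 20));
    }
}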

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

$$\text{Latency} = 2\,RTT + \frac{O}{R} + P\left[RTT + \frac{S}{R}\right] - \left(2^{P} - 1\right)\frac{S}{R}$$

where P is the number of times TCP idles at the server:

$$P = \min\{Q,\; K - 1\}$$

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[Timeline diagram: the client initiates the TCP connection, then requests the object; the server sends the first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R; transmission completes and the object is delivered; time axes shown at client and server.]

Example:
O/S = 15 segments
K = 4 windows
Q = 2
P = min{K-1, Q} = 2, so the server idles P = 2 times

Delay components:
2 RTT for connection establishment and request
O/R to transmit the object
time the server idles due to slow start

3-106

TCP Delay Modeling (3)

For k = 1, 2, ..., K:

time to transmit the k-th window: $2^{k-1}\,\frac{S}{R}$

time from when the server starts to send a segment until the server receives its acknowledgement: $RTT + \frac{S}{R}$

idle time after the k-th window: $\left[\frac{S}{R} + RTT - 2^{k-1}\,\frac{S}{R}\right]^{+}$

$$\begin{aligned}
\text{delay} &= 2\,RTT + \frac{O}{R} + \sum_{k=1}^{P}\text{idleTime}_k \\
&= 2\,RTT + \frac{O}{R} + \sum_{k=1}^{P}\left[\frac{S}{R} + RTT - 2^{k-1}\,\frac{S}{R}\right] \\
&= 2\,RTT + \frac{O}{R} + P\left[RTT + \frac{S}{R}\right] - \left(2^{P} - 1\right)\frac{S}{R}
\end{aligned}$$

[The slow-start timeline diagram from the previous slide illustrates these idle periods.]

3-107

TCP Delay Modeling (4)

Recall that K is the number of windows that cover the object. How do we calculate K?

$$\begin{aligned}
K &= \min\{k : 2^{0}S + 2^{1}S + \cdots + 2^{k-1}S \ge O\} \\
  &= \min\{k : 2^{0} + 2^{1} + \cdots + 2^{k-1} \ge O/S\} \\
  &= \min\left\{k : 2^{k} - 1 \ge \frac{O}{S}\right\} \\
  &= \min\left\{k : k \ge \log_{2}\left(\frac{O}{S} + 1\right)\right\} \\
  &= \left\lceil \log_{2}\left(\frac{O}{S} + 1\right) \right\rceil
\end{aligned}$$

The calculation of Q, the number of idle periods for an infinite-size object, is similar (see HW).

3-108

Experiment A

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large).

R        | O/R      | P | Min latency (O/R + 2 RTT) | Latency with slow start
28 kbps  | 28.6 sec | 1 | 28.8 sec                  | 28.9 sec
100 kbps | 8 sec    | 2 | 8.2 sec                   | 8.4 sec
1 Mbps   | 800 msec | 5 | 1.0 sec                   | 1.5 sec
10 Mbps  | 80 msec  | 7 | 0.28 sec                  | 0.98 sec

Conclusion: slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small).

R        | O/R      | P | Min latency (O/R + 2 RTT) | Latency with slow start
28 kbps  | 1.43 sec | 1 | 1.63 sec                  | 1.73 sec
100 kbps | 0.4 sec  | 2 | 0.6 sec                   | 0.76 sec
1 Mbps   | 40 msec  | 3 | 0.24 sec                  | 0.52 sec
10 Mbps  | 4 msec   | 3 | 0.20 sec                  | 0.50 sec

Conclusion: slow start adds appreciable delay when R is high and the object is relatively small (a sketch reproducing these numbers follows).
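A small sketch (hypothetical helper names) that plugs the slow-start latency formula from the preceding slides into code and reproduces the "Latency with slow start" column of Experiment B; the closed form used for Q, namely Q = ceil(log2(1 + RTT*R/S)), is an assumption that is consistent with the P values in these tables.

public class SlowStartLatency {
    static double latency(double O, double S, double R, double rtt) {
        int K = (int) Math.ceil(Math.log(O / S + 1) / Math.log(2));          // windows that cover the object
        int Q = (int) Math.ceil(Math.log(1 + rtt * R / S) / Math.log(2));    // idles for an infinite object
        int P = Math.min(Q, K - 1);                                          // actual number of idle periods
        return 2 * rtt + O / R + P * (rtt + S / R) - (Math.pow(2, P) - 1) * S / R;
    }

    public static void main(String[] args) {
        double S = 536 * 8, O = 5_000 * 8, rtt = 0.100;                      // Experiment B parameters
        for (double R : new double[]{28e3, 100e3, 1e6, 10e6}) {
            System.out.printf("R = %8.0f bps: latency with slow start = %.2f s (min = %.2f s)%n",
                              R, latency(O, S, R, rtt), 2 * rtt + O / R);
        }
    }
}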

3-110

Experiment C

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small).

R        | O/R      | P | Min latency (O/R + 2 RTT) | Latency with slow start
28 kbps  | 1.43 sec | 3 | 3.4 sec                   | 5.8 sec
100 kbps | 0.4 sec  | 3 | 2.4 sec                   | 5.2 sec
1 Mbps   | 40 msec  | 3 | 2.0 sec                   | 5.0 sec
10 Mbps  | 4 msec   | 3 | 2.0 sec                   | 5.0 sec

Conclusion: slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
1 base HTML page (of size O bits)
M images (each of size O bits)

Non-persistent HTTP:
M+1 TCP connections in series
Response time = (M+1) O/R + (M+1) 2 RTT + sum of idle times

Persistent HTTP:
2 RTT to request and receive the base HTML file
1 RTT to request and receive the M images
Response time = (M+1) O/R + 3 RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
suppose M/X is an integer
1 TCP connection for the base file
M/X sets of parallel connections for the images
Response time = (M+1) O/R + (M/X + 1) 2 RTT + sum of idle times

(A small numeric comparison of the three formulas follows.)
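A quick sketch comparing the three response-time formulas above while ignoring the "sum of idle times" term (so the printed numbers are lower bounds); the parameters match the chart on the next slide.

public class HttpResponseTime {
    public static void main(String[] args) {
        double O = 5_000 * 8;      // 5 Kbytes per object, in bits
        int M = 10, X = 5;         // 10 images, 5 parallel connections
        double rtt = 0.100;        // 100 msec

        for (double R : new double[]{28e3, 100e3, 1e6, 10e6}) {
            double nonPersistent = (M + 1) * O / R + (M + 1) * 2 * rtt;
            double persistent    = (M + 1) * O / R + 3 * rtt;
            double parallel      = (M + 1) * O / R + ((double) M / X + 1) * 2 * rtt;
            System.out.printf("R = %8.0f bps: non-persistent %.2f s, persistent %.2f s, parallel %.2f s%n",
                              R, nonPersistent, persistent, parallel);
        }
    }
}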

3-112

[Bar chart: HTTP response time (seconds) at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps for non-persistent, persistent, and parallel non-persistent connections; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5; y-axis 0-20 s.]

For a low-bandwidth connection, response time is dominated by transmission time.
Persistent connections give only a minor improvement over parallel connections.

3-113

[Bar chart: HTTP response time (seconds) at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps for the same three schemes; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5; y-axis 0-70 s.]

For a larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services:
multiplexing, demultiplexing
reliable data transfer
flow control
congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application, transport layers), into the network "core".

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 63: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-63

TCP retransmission scenarios

Host A

Seq=100 20 bytes data

ACK=100

timepremature timeout

Host B

Seq=92 8 bytes data

ACK=120

Seq=92 8 bytes data

Seq=

92

tim

eout

ACK=120

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

lost ACK scenario

Host B

X

Seq=92 8 bytes data

ACK=100

time

Seq=

92

tim

eout

SendBase= 100

SendBase= 120

SendBase= 120

Sendbase= 100

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 64: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-64

TCP retransmission scenarios (more)

Host A

Seq=92 8 bytes data

ACK=100

loss

tim

eout

Cumulative ACK scenario

Host B

X

Seq=100 20 bytes data

ACK=120

time

SendBase= 120

Room for improvement

3-65

TCP ACK generation [RFC 1122 RFC 2581]

Event at Receiver

Arrival of in-order segment withexpected seq All data up toexpected seq already ACKed

Arrival of in-order segment withexpected seq One other segment has ACK pending

Arrival of out-of-order segmenthigher-than-expect seq Gap detected

Arrival of segment that partially or completely fills gap

TCP Receiver action

Delayed ACK Wait up to 500msfor next segment If no next segmentsend ACK

Immediately send single cumulative ACK ACKing both in-order segments

Immediately send duplicate ACK indicating seq of next expected byte

Immediate send ACK provided thatsegment startsat lower end of gap

Ack the ldquolargest in-order byterdquo seq

REDUCE FEEDBACK TRAFFIC BY HALF

3-66

Fast Retransmit

Time-out period often relatively long long delay before

resending lost packet

Detect lost segments via duplicate ACKs Sender often sends

many segments back-to-back

If segment is lost there will likely be many duplicate ACKs

If sender receives 3 ACKs for the same data it supposes that segment after ACKed data was lost fast retransmit

resend segment before timer expires

timeout

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)
Fairness and UDP: multimedia apps often do not use TCP, because they do not want their rate throttled by congestion control. Instead they use UDP: pump audio/video at a constant rate and tolerate packet loss. (Research area: TCP-friendly congestion control.)
Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts, and Web browsers do this. Example: a link of rate R supporting 9 connections. A new app asking for 1 TCP connection gets rate R/10; a new app asking for 11 TCP connections gets around R/2 (11 of the 20 connections now crossing the link).

3-101

Delay modeling
Q: How long does it take to receive an object from a Web server after sending a request?
The delay is influenced by: TCP connection establishment, data transmission delay, and slow start (the sender's congestion window).
Notation and assumptions:
Assume one link between client and server, of rate R. S = MSS (bits), O = object size (bits). No retransmissions (no loss, no corruption). The protocol's overhead is negligible.
Window size: first assume a fixed congestion window of W segments, then model a dynamic window (slow start).

3-102

Fixed congestion window (1): assume no congestion window constraint.
Let W denote a 'fixed' congestion window size (a positive integer).
First case: W S/R > RTT + S/R, i.e. the ACK for the first segment in the window returns before a window's worth of data has been sent.

    delay = 2 RTT + O/R

The object request is piggybacked; this is the "lower bound" on latency.

3-103

Fixed (or static) congestion window (2)
Second case: W S/R < RTT + S/R, i.e. the sender must wait for an ACK after sending each window's worth of data.

    delay = 2 RTT + O/R + (K - 1) [S/R + RTT - W S/R]

where K is the number of windows of data that cover the object: if O is the "size" of the object, K = O/(W S), rounded up to an integer.
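A small calculator for the two static-window cases above (a sketch under the slide's assumptions; inputs are in bits, bits per second and seconds):

import math

def static_window_delay(O, R, S, rtt, W):
    # O: object size (bits), R: link rate (bits/s), S: MSS (bits), W: window (segments)
    if W * S / R > rtt + S / R:
        # case 1: the ACK returns before the window is exhausted
        return 2 * rtt + O / R
    K = math.ceil(O / (W * S))   # number of windows covering the object
    # case 2: the sender stalls after every window except the last one
    return 2 * rtt + O / R + (K - 1) * (S / R + rtt - W * S / R)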

3-104

TCP Delay Modeling: Slow Start (1)
Now suppose the window grows according to slow start.
We will show that the delay for one object is

    Latency = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where P is the number of times TCP idles at the server:

    P = min{Q, K - 1}

where Q is the number of times the server would idle if the object were of infinite size, and K is the number of windows that cover the object.

3-105

TCP Delay Modeling: Slow Start (2)
[Figure: client/server timing diagram. After the RTT to initiate the TCP connection and the object request, the server sends a first window of S/R, a second window of 2S/R, a third window of 4S/R and a fourth window of 8S/R, idling between some windows, until transmission completes and the object is delivered; time runs downward at the client and at the server.]
Example:
• O/S = 15 segments
• K = 4 windows
• Q = 2
• P = min{K-1, Q} = 2
so the server idles P = 2 times.
Delay components:
• 2 RTT for connection establishment and request
• O/R to transmit the object
• the time the server idles due to slow start
In general the server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)
Time to transmit the kth window: 2^(k-1) S/R.
Time from when the server starts to send a segment until it receives its acknowledgement: S/R + RTT.
Idle time after the kth window: [S/R + RTT - 2^(k-1) S/R]^+, for k = 1, 2, ..., K.

    delay = 2 RTT + O/R + sum over p = 1..P of idleTime_p
          = 2 RTT + O/R + sum over k = 1..P of [S/R + RTT - 2^(k-1) S/R]
          = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

[Figure: the same client/server slow-start timing diagram as on the previous slide, annotating each window (S/R, 2S/R, 4S/R, 8S/R) and the idle gaps.]

3-107

TCP Delay Modeling (4)
Recall K = the number of windows that cover the object. How do we calculate K?

    K = min{k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O}
      = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
      = min{k : 2^k - 1 >= O/S}
      = min{k : k >= log2(O/S + 1)}
      = ceil(log2(O/S + 1))

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
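Putting the model together: a sketch of a latency calculator for slow start (illustrative only; Q is computed directly from the idle-time condition rather than the closed form left as homework).

import math

def slow_start_latency(O, R, S, rtt):
    # O: object size (bits), R: link rate (bits/s), S: MSS (bits), rtt: seconds
    K = math.ceil(math.log2(O / S + 1))      # windows covering the object
    # the server idles after window k as long as S/R + RTT > 2**(k-1) * S/R
    Q = sum(1 for k in range(1, 100) if S / R + rtt > (2 ** (k - 1)) * S / R)
    P = min(Q, K - 1)                        # idle periods actually experienced
    return 2 * rtt + O / R + P * (rtt + S / R) - (2 ** P - 1) * S / R

# e.g. S = 536 bytes, RTT = 100 msec, O = 5 kbytes, R = 1 Mbps gives about 0.52 sec,
# in line with the corresponding row of Experiment B below
print(round(slow_start_latency(O=5000 * 8, R=1e6, S=536 * 8, rtt=0.1), 2))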

3-108

Experiment A: S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

R          O/R         P    Min latency (O/R + 2 RTT)    Latency with slow start
28 kbps    28.6 sec    1    28.8 sec                     28.9 sec
100 kbps   8 sec       2    8.2 sec                      8.4 sec
1 Mbps     800 msec    5    1.0 sec                      1.5 sec
10 Mbps    80 msec     7    0.28 sec                     0.98 sec

Observation: slow start adds appreciable delay only when R is high. If R is low, an ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B: S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

R          O/R         P    Min latency (O/R + 2 RTT)    Latency with slow start
28 kbps    1.43 sec    1    1.63 sec                     1.73 sec
100 kbps   0.4 sec     2    0.6 sec                      0.76 sec
1 Mbps     40 msec     3    0.24 sec                     0.52 sec
10 Mbps    4 msec      3    0.20 sec                     0.50 sec

Observation: slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C: S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

R          O/R         P    Min latency (O/R + 2 RTT)    Latency with slow start
28 kbps    1.43 sec    3    3.4 sec                      5.8 sec
100 kbps   0.4 sec     3    2.4 sec                      5.2 sec
1 Mbps     40 msec     3    2.0 sec                      5.0 sec
10 Mbps    4 msec      3    2.0 sec                      5.0 sec

Observation: slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling
Assume a Web page consists of 1 base HTML page (of size O bits) and M images (each of size O bits).
Non-persistent HTTP: M+1 TCP connections in series.
Response time = (M+1) O/R + (M+1) 2 RTT + sum of idle times.
Persistent HTTP: 2 RTT to request and receive the base HTML file, 1 RTT to request and receive the M images.
Response time = (M+1) O/R + 3 RTT + sum of idle times.
Non-persistent HTTP with X parallel connections (suppose M/X is an integer): 1 TCP connection for the base file, then M/X sets of X parallel connections for the images.
Response time = (M+1) O/R + (M/X + 1) 2 RTT + sum of idle times.
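A sketch comparing the three response-time formulas (the sum-of-idle-times terms are omitted, so these are only the connection-setup plus transmission components; the function name and example values are assumptions):

def http_response_times(O, R, rtt, M, X):
    # O: object size (bits), R: link rate (bits/s), M: number of images, X: parallel connections
    transmit = (M + 1) * O / R
    non_persistent = transmit + (M + 1) * 2 * rtt
    persistent = transmit + 3 * rtt
    parallel = transmit + (M / X + 1) * 2 * rtt
    return non_persistent, persistent, parallel

# e.g. RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5 on a 1 Mbps link
print(http_response_times(O=5000 * 8, R=1e6, rtt=0.1, M=10, X=5))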

3-112

[Figure: HTTP response time (in seconds) for RTT = 100 msec, O = 5 Kbytes, M = 10 and X = 5, at link rates of 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps, comparing non-persistent, persistent and parallel non-persistent HTTP.]
For a low-bandwidth connection, response time is dominated by transmission time.
Persistent connections give only a minor improvement over parallel connections.

3-113

[Figure: HTTP response time (in seconds) for RTT = 1 sec, O = 5 Kbytes, M = 10 and X = 5, at link rates of 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps, comparing non-persistent, persistent and parallel non-persistent HTTP.]
For a larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in networks with a high delay-bandwidth product.

3-114

Chapter 3 Summary
Principles behind transport layer services: multiplexing and demultiplexing, reliable data transfer, flow control, congestion control.
Their instantiation and implementation in the Internet: UDP, TCP.
Next: leaving the network "edge" (application and transport layers) and heading into the network "core".

Page 67: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-67

event ACK received with ACK field value of y if (y gt SendBase) SendBase = y if (there are currently not-yet-acknowledged segments) start timer else increment count of dup ACKs received for y if (count of dup ACKs received for y = 3) resend segment with sequence number y

Fast retransmit algorithm

a duplicate ACK for already ACKed segment

fast retransmitQ why resend pkt

with seq y

A That is what the receiver expect

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont.)
Closing a connection:
- client closes its socket: clientSocket.close();
Step 1: client end system sends a TCP FIN control segment to the server
Step 2: server receives the FIN, replies with an ACK; it closes the connection and sends its own FIN
(message sequence: client sends FIN; server replies with ACK, then FIN; client ACKs and enters a "timed wait" before the connection is fully closed)

Q: why don't we combine the ACK and the FIN?
A: The server side may still have some data in the pipeline to deliver before it can close.

3-75

TCP Connection Management (cont.)
Step 3: client receives the FIN, replies with an ACK
- it enters "timed wait": it will respond with an ACK to any retransmitted FINs
Step 4: server receives the ACK; connection closed
Note: with a small modification, this can handle simultaneous FINs.
(figure: closing handshake between client and server; both sides pass through closing states, the client waits out the timed-wait interval, then both ends are closed)

3-76

TCP Connection Management (cont.)
(figures: TCP client lifecycle and TCP server lifecycle state diagrams)

3-77

Chapter 3 outline
3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP (segment structure, reliable data transfer, flow control, connection management)
3.6 Principles of congestion control
3.7 TCP congestion control

3-78

Principles of Congestion Control
Congestion, informally: "too many sources sending too much data too fast for the network to handle"
- different from flow control!
- manifestations: lost packets (buffer overflow at routers), long delays (queueing in router buffers)
- another top-10 problem!
Note: TCP provides one of the MANY ways to perform congestion control.

3-79

Causes/costs of congestion: scenario 1
- two senders, two receivers (homogeneous)
- one router, infinite buffers, capacity C
- no retransmission
- large delays when congested
- maximum achievable throughput
(figure: Host A and Host B each send λin of original data from their applications into a router with unlimited shared output-link buffers; the receiving applications get λout)

3-80

Causes/costs of congestion: scenario 2
- one router, finite buffers
- sender retransmits lost packets (the transport layer does this because packets get dropped)
(figure: Host A offers λin of original data plus retransmitted data, λ'in, into a router with finite shared output-link buffers; Host B receives λout)

3-81

Causes/costs of congestion: scenario 2 (cont.)
- always: λin = λout (goodput)
- "perfect" retransmission, only when loss: λ'in > λout
- retransmission of a delayed (not lost) packet makes λ'in larger (than in the perfect case) for the same λout
"Costs" of congestion:
- more work (retransmissions) for a given "goodput"
- unneeded retransmissions: the link carries multiple copies of a packet

3-82

Causes/costs of congestion: scenario 3
- four senders, multihop paths, timeout/retransmit
Q: what happens as λin and λ'in increase?
(figure: hosts send λin of original data plus λ'in of retransmitted data over multihop paths through routers with finite shared output-link buffers; each router is shared by two flows)

3-83

Causes/costs of congestion: scenario 3 (cont.)
Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted.
As the offered load keeps increasing, λout falls toward zero: the system collapses (e.g., students).
(figure: λout versus λ'in, dropping off at high load)

3-84

Approaches towards congestion control
Two broad approaches:
End-end congestion control:
- no explicit feedback from the network
- congestion inferred from end-system observed loss and delay
- approach taken by TCP
Network-assisted congestion control:
- routers provide feedback to end systems
- a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
- or the explicit rate the sender should send at

3-85

Case study: ATM ABR congestion control
ABR: available bit rate
- "elastic service"
- if the sender's path is "underloaded": the sender should use the available bandwidth
- if the sender's path is congested: the sender is throttled to a minimum guaranteed rate
RM (resource management) cells:
- sent by the sender, interspersed with data cells
- bits in the RM cell are set by switches ("network-assisted"):
  - NI bit: no increase in rate (mild congestion)
  - CI bit: congestion indication
- RM cells are returned to the sender by the receiver, with the bits intact

3-86

Case study: ATM ABR congestion control (cont.)
- two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell, so the sender's send rate becomes the minimum supportable rate on the path
- EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

3-87

Chapter 3 outline
3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP (segment structure, reliable data transfer, flow control, connection management)
3.6 Principles of congestion control
3.7 TCP congestion control

3-88

TCP Congestion Control
- end-end control (no network assistance)
- sender limits transmission: LastByteSent - LastByteAcked ≤ CongWin
- roughly, rate = CongWin / RTT bytes/sec
- CongWin is a dynamic function of perceived network congestion
How does the sender perceive congestion?
- loss event = timeout or 3 duplicate ACKs
- the TCP sender reduces its rate (CongWin) after a loss event
Three mechanisms: AIMD, slow start, conservative behavior after timeout events.
Note: congestion control must also be efficient, i.e., make use of the available bandwidth.

3-89

TCP AIMD
- multiplicative decrease: cut CongWin in half after a loss event
- additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing
(figure: long-lived TCP connection; the congestion window follows a sawtooth between roughly 8, 16, and 24 Kbytes over time)
Role of ACKs:
1. an indication of loss
2. ACKs are self-clocking: if the delay is large, the congestion window grows more slowly
Human analogy: HK government or CUHK.

3-90

TCP Slow Start
- when the connection begins, CongWin = 1 MSS
  - example: MSS = 500 bytes & RTT = 200 msec → initial rate = 20 kbps
- the available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate
- so, when the connection begins, increase the rate exponentially fast until the first loss event

3-91

TCP Slow Start (more)
- when the connection begins, increase the rate exponentially until the first loss event:
  - double CongWin every RTT
  - done by incrementing CongWin by 1 MSS for every ACK received
- summary: the initial rate is slow but ramps up exponentially fast
(figure: Host A sends one segment, then two, then four, one round per RTT, to Host B)

3-92

Refinement
After 3 dup ACKs:
- CongWin is cut in half; the window then grows linearly
- this is known as the "fast recovery" phase, implemented in (newer) TCP Reno
But after a timeout event:
- CongWin is instead set to 1 MSS
- the window then grows exponentially up to a threshold, then grows linearly
Philosophy:
- 3 dup ACKs indicate the network is still capable of delivering some segments
- a timeout before 3 dup ACKs is "more alarming"

3-93

Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.
Implementation:
- a variable Threshold
- at a loss event, Threshold is set to 1/2 of CongWin just before the loss event
(figure: congestion window size (segments) versus transmission round (1-15), showing the threshold and the diverging behavior of TCP Tahoe and TCP Reno after a loss)

3-94

Summary: TCP Congestion Control
- when CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially
- when CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly
- when a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold
- when a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS
READ THE BOOK on TCP Vegas: instead of reacting to a loss, it anticipates and prepares for a loss.
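The four rules above are easy to animate with a toy simulation. The sketch below is a deliberately simplified model (window counted in MSS units, one made-up loss schedule, no fast-recovery window inflation), not a faithful TCP implementation; the initial Threshold of 8 MSS is an arbitrary assumption.

    public class CongestionWindowSketch {
        public static void main(String[] args) {
            double congWin = 1;        // in MSS
            double threshold = 8;      // initial Threshold, in MSS (assumed)

            for (int rtt = 1; rtt <= 20; rtt++) {
                System.out.printf("RTT %2d: CongWin = %5.1f MSS, Threshold = %4.1f MSS%n",
                        rtt, congWin, threshold);

                // Made-up loss schedule: triple dup ACK at round 8, timeout at round 14.
                if (rtt == 8) {                      // triple duplicate ACK
                    threshold = congWin / 2;
                    congWin = threshold;             // Reno-style fast recovery
                } else if (rtt == 14) {              // timeout
                    threshold = congWin / 2;
                    congWin = 1;                     // back to slow start
                } else if (congWin < threshold) {
                    congWin *= 2;                    // slow start: exponential growth
                } else {
                    congWin += 1;                    // congestion avoidance: +1 MSS per RTT
                }
            }
        }
    }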

3-95

TCP sender congestion control

Event: ACK receipt for previously unACKed data
  State: Slow Start (SS)
  Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
  Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unACKed data
  State: Congestion Avoidance (CA)
  Action: CongWin = CongWin + MSS * (MSS / CongWin)
  Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
  State: SS or CA
  Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
  Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
  State: SS or CA
  Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
  Commentary: enter slow start

Event: duplicate ACK
  State: SS or CA
  Action: increment the duplicate ACK count for the segment being ACKed
  Commentary: CongWin and Threshold not changed

3-96

TCP throughput
What is the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)
- let W be the window size when a loss occurs
- when the window is W, the throughput is W/RTT
- just after a loss, the window drops to W/2, and the throughput to W/(2 RTT)
- so the average throughput is about 0.75 W/RTT
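A quick numeric sanity check of the 0.75 W/RTT figure, averaging the sawtooth as the window climbs linearly from W/2 back up to W (an idealized model; the values of W and RTT below are arbitrary):

    public class AvgThroughputCheck {
        public static void main(String[] args) {
            double W = 20;            // window at the moment of loss, in segments (arbitrary)
            double rtt = 0.1;         // seconds (arbitrary)

            // One AIMD cycle: the window grows 1 segment per RTT from W/2 to W.
            double sum = 0;
            int rounds = 0;
            for (double w = W / 2; w <= W; w += 1) {
                sum += w / rtt;       // throughput in that round, segments per second
                rounds++;
            }
            double average = sum / rounds;
            System.out.printf("average = %.1f segments/s, 0.75 W/RTT = %.1f segments/s%n",
                    average, 0.75 * W / rtt);
        }
    }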

3-97

TCP Futures
- example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
- this requires a window size of W = 83,333 in-flight segments
- throughput in terms of the loss rate: Throughput = (1.22 * MSS) / (RTT * sqrt(L))
- to sustain 10 Gbps this needs L = 2 * 10^-10. Wow!
- new versions of TCP for high-speed links are needed
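A quick back-of-the-envelope check of those two numbers, with MSS, RTT, and the target rate taken from the example above:

    public class HighSpeedTcpCheck {
        public static void main(String[] args) {
            double mssBits = 1500 * 8;        // 1500-byte segments
            double rtt = 0.1;                 // 100 ms
            double target = 10e9;             // 10 Gbps

            // Window needed to fill the pipe: throughput = W * MSS / RTT.
            double w = target * rtt / mssBits;
            System.out.printf("required window W    = %.0f segments%n", w);   // ~83,333

            // Loss rate from Throughput = 1.22 * MSS / (RTT * sqrt(L)).
            double sqrtL = 1.22 * mssBits / (rtt * target);
            double l = sqrtL * sqrtL;
            System.out.printf("required loss rate L = %.1e%n", l);            // ~2e-10
        }
    }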

3-98

TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.
(figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R)

3-99

Why is TCP fair?
Consider two competing homogeneous sessions (e.g., similar propagation delay, etc.):
- additive increase gives a slope of 1 as the two throughputs increase together
- multiplicative decrease cuts each throughput proportionally
The two connections therefore cycle through: congestion avoidance (additive increase), loss (window decreased by a factor of 2), congestion avoidance again, and so on.
(figure: Connection 2 throughput versus Connection 1 throughput, bounded by R on each axis; the trajectory oscillates toward the equal-bandwidth-share line)
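The convergence argument can also be seen numerically. This sketch is a highly idealized model (both flows lose exactly when their combined rate exceeds the link rate R, and rates are tracked directly rather than windows), chosen only to show that additive increase / multiplicative decrease pulls two unequal flows toward an equal share.

    public class AimdFairnessSketch {
        public static void main(String[] args) {
            double linkRate = 100;     // bottleneck capacity R (arbitrary units)
            double x1 = 10, x2 = 70;   // two flows starting far from the fair share

            for (int step = 1; step <= 60; step++) {
                if (x1 + x2 > linkRate) {   // overload: both flows see a loss
                    x1 /= 2;                // multiplicative decrease
                    x2 /= 2;
                } else {                    // additive increase, 1 unit per RTT each
                    x1 += 1;
                    x2 += 1;
                }
                if (step % 10 == 0) {
                    System.out.printf("step %2d: x1 = %5.1f, x2 = %5.1f%n", step, x1, x2);
                }
            }
            // The gap x2 - x1 is halved at every loss, so both flows approach R/2.
        }
    }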

3-100

Fairness (more)
Fairness and UDP:
- multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
- instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
- research area: TCP-friendly congestion control
Fairness and parallel TCP connections:
- nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this
- example: a link of rate R already supporting 9 connections
  - a new app asking for 1 TCP connection gets rate R/10
  - a new app asking for 11 TCP connections gets around R/2

3-101

Delay modeling
Q: How long does it take to receive an object from a Web server after sending a request?
The delay is influenced by:
- TCP connection establishment
- data transmission delay
- slow start (the sender's congestion window)
Notation, assumptions:
- one link between client and server, of rate R
- S: MSS (bits); O: object size (bits)
- no retransmissions (no loss, no corruption)
- protocol overhead is negligible
Window size:
- first assume a fixed congestion window of W segments
- then dynamic window modeling: slow start

3-102

Fixed congestion window (1) (assume no congestion-window constraint)
Let W denote a fixed congestion window size (a positive integer).
First case: WS/R > RTT + S/R
- the ACK for the first segment in the window returns before a window's worth of data has been sent
- delay = 2 RTT + O/R  (the object request is piggybacked on the connection setup)
- this is the "lower bound" of the latency

3-103

Fixed (or static) congestion window (2)
Second case: WS/R < RTT + S/R
- the sender must wait for an ACK after sending each window's worth of data
- delay = 2 RTT + O/R + (K-1)[S/R + RTT - WS/R]
- where K is the number of windows of data that cover the object: if O is the size of the object, K = O/(WS), rounded up to an integer
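A direct transcription of the two cases, using the variable names from the slide (the example values in main are arbitrary):

    public class StaticWindowDelay {
        // Latency for one object of O bits over a link of rate R, MSS of S bits,
        // round-trip time rtt, and a fixed window of W segments.
        static double delay(double O, double R, double S, double rtt, int W) {
            if (W * S / R >= rtt + S / R) {          // case 1: the pipe never drains
                return 2 * rtt + O / R;
            }
            int K = (int) Math.ceil(O / (W * S));    // windows needed to cover the object
            return 2 * rtt + O / R + (K - 1) * (S / R + rtt - W * S / R);  // case 2
        }

        public static void main(String[] args) {
            double S = 536 * 8;                      // MSS in bits (as in the experiments)
            double O = 100_000 * 8;                  // 100 kbyte object
            double R = 1e6;                          // 1 Mbps link
            double rtt = 0.1;                        // 100 ms
            for (int W : new int[]{1, 2, 4, 8, 16}) {
                System.out.printf("W = %2d: delay = %.2f s%n", W, delay(O, R, S, rtt, W));
            }
        }
    }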

3-104

TCP Delay Modeling: Slow Start (1)
Now suppose the window grows according to slow start. We will show that the delay for one object is:

    Latency = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where P is the number of times TCP idles at the server:

    P = min{Q, K - 1}

- Q is the number of times the server would idle if the object were of infinite size
- K is the number of windows that cover the object

3-105

TCP Delay Modeling: Slow Start (2)
(figure: timing diagram between client and server; after connection setup and the object request, the server sends windows taking S/R, 2S/R, 4S/R, 8S/R, idling between the early windows until the object is delivered)
Example: O/S = 15 segments, K = 4 windows, Q = 2, so P = min{K-1, Q} = 2: the server idles P = 2 times.
Delay components:
- 2 RTT for connection establishment and request
- O/R to transmit the object
- the time the server idles due to slow start (P idle periods)

3-106

TCP Delay Modeling (3)
For k = 1, 2, ..., K:
- S/R + RTT = time from when the server starts to send a segment until it receives the acknowledgement
- 2^(k-1) S/R = time to transmit the kth window
- [S/R + RTT - 2^(k-1) S/R]+ = idle time after the kth window

    delay = 2 RTT + O/R + sum_{k=1..P} idleTime_k
          = 2 RTT + O/R + sum_{k=1..P} [S/R + RTT - 2^(k-1) S/R]
          = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

(figure: the same slow-start timing diagram as on the previous slide)

3-107

TCP Delay Modeling (4)
Recall that K is the number of windows that cover the object. How do we calculate K?

    K = min{k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O}
      = min{k : (2^k - 1) S >= O}
      = min{k : 2^k >= O/S + 1}
      = min{k : k >= log2(O/S + 1)}
      = ceil( log2(O/S + 1) )

The calculation of Q, the number of idle periods for an infinite-size object, is similar (see HW).
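Putting the pieces together, the sketch below computes K, Q, P, and the slow-start latency, and compares it with the no-slow-start lower bound O/R + 2 RTT. The closed form used for Q, ceil(log2(1 + RTT*R/S)), is derived the same way as K but is left to the homework in the slides, so treat it as an assumption here; the parameters in main match Experiment B.

    public class SlowStartDelay {
        static double latency(double O, double R, double S, double rtt) {
            int K = (int) Math.ceil(Math.log(O / S + 1) / Math.log(2));       // windows covering O
            int Q = (int) Math.ceil(Math.log(1 + rtt * R / S) / Math.log(2)); // idles, infinite object (assumed form)
            int P = Math.min(Q, K - 1);                                       // actual idle periods
            return 2 * rtt + O / R + P * (rtt + S / R) - (Math.pow(2, P) - 1) * S / R;
        }

        public static void main(String[] args) {
            double S = 536 * 8;          // MSS, bits
            double O = 5_000 * 8;        // 5 kbyte object (Experiment B)
            double rtt = 0.1;            // 100 ms
            for (double R : new double[]{28e3, 100e3, 1e6, 10e6}) {
                System.out.printf("R = %8.0f bps: min latency = %.2f s, with slow start = %.2f s%n",
                        R, O / R + 2 * rtt, latency(O, R, S, rtt));
            }
        }
    }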

3-108

Experiment A: S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

  R         O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
  28 kbps   28.6 sec   1   28.8 sec                    28.9 sec
  100 kbps  8.0 sec    2   8.2 sec                     8.4 sec
  1 Mbps    800 msec   5   1.0 sec                     1.5 sec
  10 Mbps   80 msec    7   0.28 sec                    0.98 sec

Observation: slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B: S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

  R         O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
  28 kbps   1.43 sec   1   1.63 sec                    1.73 sec
  100 kbps  0.4 sec    2   0.6 sec                     0.76 sec
  1 Mbps    40 msec    3   0.24 sec                    0.52 sec
  10 Mbps   4 msec     3   0.20 sec                    0.50 sec

Observation: slow start adds appreciable delay when R is high, even for a relatively small object.

3-110

Experiment C: S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

  R         O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
  28 kbps   1.43 sec   3   3.4 sec                     5.8 sec
  100 kbps  0.4 sec    3   2.4 sec                     5.2 sec
  1 Mbps    40 msec    3   2.0 sec                     5.0 sec
  10 Mbps   4 msec     3   2.0 sec                     5.0 sec

Observation: slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling
Assume a Web page consists of:
- 1 base HTML page (of size O bits)
- M images (each of size O bits)
Non-persistent HTTP:
- M+1 TCP connections in series
- Response time = (M+1) O/R + (M+1) 2 RTT + sum of idle times
Persistent HTTP:
- 2 RTT to request and receive the base HTML file, 1 RTT to request and receive the M images
- Response time = (M+1) O/R + 3 RTT + sum of idle times
Non-persistent HTTP with X parallel connections:
- suppose M/X is an integer
- 1 TCP connection for the base file, then M/X sets of X parallel connections for the images
- Response time = (M+1) O/R + (M/X + 1) 2 RTT + sum of idle times
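The three formulas, minus the slow-start idle times (a simplification, so the numbers are lower bounds), can be compared directly; the parameters below are the ones used in the charts that follow (RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5).

    public class HttpResponseTime {
        public static void main(String[] args) {
            double O = 5_000 * 8;        // object size, bits
            int M = 10, X = 5;           // number of images, parallelism
            double rtt = 0.1;            // 100 ms

            for (double R : new double[]{28e3, 100e3, 1e6, 10e6}) {
                double transmit = (M + 1) * O / R;                 // common to all three cases
                double nonPersistent = transmit + (M + 1) * 2 * rtt;
                double persistent    = transmit + 3 * rtt;
                double parallel      = transmit + ((double) M / X + 1) * 2 * rtt;
                System.out.printf("R = %8.0f bps: non-persistent %.2f s, persistent %.2f s, parallel %.2f s%n",
                        R, nonPersistent, persistent, parallel);
            }
        }
    }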

3-112

(figure: HTTP response time in seconds at access rates of 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps, for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5)
For a low-bandwidth connection, response time is dominated by transmission time.
Persistent connections give only a minor improvement over parallel connections.

3-113

(figure: HTTP response time in seconds at the same access rates, now with RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5)
For larger RTT, response time is dominated by TCP establishment and slow-start delays.
Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary
- principles behind transport layer services: multiplexing/demultiplexing, reliable data transfer, flow control, congestion control
- instantiation and implementation in the Internet: UDP, TCP
Next: leaving the network "edge" (application and transport layers) and heading into the network "core".

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 68: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-68

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 69: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-69

TCP Flow Control

receive side of TCP connection has a receive buffer

speed-matching service matching the send rate to the receiving apprsquos drain rate app process may

be slow at reading from buffer

sender wonrsquot overflow

receiverrsquos buffer bytransmitting too

much too fast

flow control

3-70

TCP Flow control how it works

(Suppose TCP receiver discards out-of-order segments)

spare room in buffer= RcvWindow

= RcvBuffer-[LastByteRcvd - LastByteRead]

Rcvr advertises spare room by including value of RcvWindow in segments

Sender limits unACKed data to RcvWindow guarantees receive

buffer doesnrsquot overflow

This goes to show that the design process of header is important

3-71

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

[Bar chart: HTTP response time in seconds (0-70) vs. access link rate (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5]

For a larger RTT, response time is dominated by TCP connection establishment and slow-start delays. Persistent connections now give an important improvement, particularly in networks with a high delay-bandwidth product.

3-114

Chapter 3 Summary

Principles behind transport layer services:
• multiplexing, demultiplexing
• reliable data transfer
• flow control
• congestion control

Instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application, transport layers) and heading into the network "core".

3-72

TCP Connection Management

Recall TCP sender receiver establish ldquoconnectionrdquo before exchanging data segments

initialize TCP variables seq s buffers flow control info

(eg RcvWindow) client connection initiator Socket clientSocket = new

Socket(hostnameport

number) server contacted by client Socket connectionSocket =

welcomeSocketaccept()

Three way handshake

Step 1 client host sends TCP SYN segment to server specifies initial seq no data

Step 2 server host receives SYN replies with SYN-ACK segment

server allocates buffers specifies server initial seq

Step 3 client receives SYN-ACK

replies with ACK segment which may contain data

IMPORTANT SYN FLOODING

3-73

TCP three-way handshake

Connection

request Connection granted

SYN=1 seq=client_isn

SYN=1 seq=server_isn

Ack=client_isn+1

Syn=0 seq=client_isn+1

Ack=server_isn+1

ACK

3-74

TCP Connection Management (cont)

Closing a connection

client closes socket clientSocketclose()

Step 1 client end system sends TCP FIN control segment to server

Step 2 server receives FIN replies with ACK Closes connection sends FIN

client

FIN

server

ACK

ACK

FIN

close

close

closed

tim

ed w

ait

Q why donrsquot we combine ACK and FIN

Sender may have some data in the pipeline

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

P = min{Q, K - 1}

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object
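The formula can be checked numerically. Here is a hedged sketch that computes K, Q and P directly from their definitions above (the function and variable names are mine, not from the slides):

```python
from math import ceil, log2

def slow_start_latency(O, S, R, rtt):
    """Latency = 2*RTT + O/R + server idle time under slow start
    (no loss, request piggybacked), per the model on these slides."""
    K = ceil(log2(O / S + 1))                # windows that cover the object
    # Q: idles for an infinite object; the server idles after window k
    # whenever 2**(k-1) * S/R < S/R + RTT
    Q = 0
    while 2 ** Q * S / R < S / R + rtt:      # window k = Q+1 still leaves the server idle
        Q += 1
    P = min(Q, K - 1)                        # times the server actually idles
    return 2 * rtt + O / R + P * (rtt + S / R) - (2 ** P - 1) * S / R

# Experiment-A-style parameters: S = 536 bytes, RTT = 100 ms, O = 100 kbytes, R = 1 Mbps
print(slow_start_latency(100e3 * 8, 536 * 8, 1e6, 0.1))   # latency in seconds
```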

3-105

TCP Delay Modeling Slow Start (2)

[Figure: slow-start timing diagram. The client spends one RTT to initiate the TCP connection and another to request the object; the server then sends windows taking S/R, 2S/R, 4S/R, 8S/R to transmit, idling between the early windows, until transmission completes and the object is delivered. Time runs down the client and server timelines.]

Example: O/S = 15 segments, K = 4 windows, Q = 2, P = min{K - 1, Q} = 2.

The server idles P = 2 times.

Delay components: 2·RTT for connection establishment and request; O/R to transmit the object; the time the server idles due to slow start.

The server idles P = min{K - 1, Q} times.

3-106

TCP Delay Modeling (3)

time to transmit the kth window = 2^(k-1)·S/R

time from when the server starts to send a segment until it receives an acknowledgement = S/R + RTT

idle time after the kth window = S/R + RTT - 2^(k-1)·S/R (when this is positive), for k = 1, 2, …, K

delay = 2·RTT + O/R + Σ_{k=1}^{P} idleTime_k
      = 2·RTT + O/R + Σ_{k=1}^{P} [S/R + RTT - 2^(k-1)·S/R]
      = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

(the last step uses Σ_{k=1}^{P} 2^(k-1) = 2^P - 1)

[Figure: the same slow-start timing diagram as in TCP Delay Modeling Slow Start (2) above]

3-107

TCP Delay Modeling (4)

Recall that K is the number of windows that cover the object. How do we calculate K?

K = min{k : 2^0·S + 2^1·S + … + 2^(k-1)·S ≥ O}
  = min{k : 2^0 + 2^1 + … + 2^(k-1) ≥ O/S}
  = min{k : 2^k - 1 ≥ O/S}
  = min{k : k ≥ log2(O/S + 1)}
  = ⌈log2(O/S + 1)⌉   (i.e., log2(O/S + 1) rounded up to an integer)

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).

3-108

Experiment A

R          O/R        P    Min latency (O/R + 2·RTT)    Latency with slow start
28 kbps    28.6 sec   1    28.8 sec                     28.9 sec
100 kbps   8.0 sec    2    8.2 sec                      8.4 sec
1 Mbps     800 msec   5    1.0 sec                      1.5 sec
10 Mbps    80 msec    7    0.28 sec                     0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large).

Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R          O/R        P    Min latency (O/R + 2·RTT)    Latency with slow start
28 kbps    1.43 sec   1    1.63 sec                     1.73 sec
100 kbps   0.4 sec    2    0.6 sec                      0.76 sec
1 Mbps     40 msec    3    0.24 sec                     0.52 sec
10 Mbps    4 msec     3    0.20 sec                     0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small).

Slow start adds appreciable delay when R is high, even for this relatively small object.

3-110

Experiment C

R          O/R        P    Min latency (O/R + 2·RTT)    Latency with slow start
28 kbps    1.43 sec   3    3.4 sec                      5.8 sec
100 kbps   0.4 sec    3    2.4 sec                      5.2 sec
1 Mbps     40 msec    3    2.0 sec                      5.0 sec
10 Mbps    4 msec     3    2.0 sec                      5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small).

Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling: assume a Web page consists of 1 base HTML page (of size O bits) and M images (each of size O bits).

Non-persistent HTTP:
- M+1 TCP connections in series
- Response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times

Persistent HTTP:
- 2 RTT to request and receive the base HTML file, 1 RTT to request and receive the M images
- Response time = (M+1)·O/R + 3·RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
- suppose M/X is an integer
- 1 TCP connection for the base file, then M/X sets of X parallel connections for the images
- Response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times
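A sketch comparing the three formulas with the slow-start idle terms ignored, so these are "sum of idle times = 0" lower bounds; the parameter values follow the charts on the next two slides, with R = 1 Mbps picked as one example, and the function name is mine:

```python
def http_response_times(O, R, rtt, M, X):
    """Lower-bound response times (idle times ignored) for the three HTTP variants."""
    non_persistent = (M + 1) * O / R + (M + 1) * 2 * rtt
    persistent     = (M + 1) * O / R + 3 * rtt
    parallel       = (M + 1) * O / R + (M / X + 1) * 2 * rtt
    return non_persistent, persistent, parallel

# O = 5 Kbytes, M = 10 images, X = 5 parallel connections, RTT = 100 ms, R = 1 Mbps
print(http_response_times(5e3 * 8, 1e6, 0.1, 10, 5))
```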

3-112

[Chart: HTTP response time (in seconds), y-axis 0 to 20, for link rates of 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps, comparing non-persistent, persistent, and parallel non-persistent HTTP. RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.]

For low bandwidth, connection and response time are dominated by transmission time. Persistent connections give only a minor improvement over parallel connections.

3-113

[Chart: HTTP response time (in seconds), y-axis 0 to 70, for link rates of 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps, comparing non-persistent, persistent, and parallel non-persistent HTTP. RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.]

For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay·bandwidth networks.

3-114

Chapter 3 Summary

- principles behind transport layer services: multiplexing/demultiplexing, reliable data transfer, flow control, congestion control
- instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application and transport layers) and heading into the network "core".

3-75

TCP Connection Management (cont)

Step 3 client receives FIN replies with ACK

Enters ldquotimed waitrdquo - will respond with ACK to received FINs

Step 4 server receives ACK Connection closed

Note with small modification can handle simultaneous FINs

client

FIN

server

ACK

ACK

FIN

closing

closing

closed

tim

ed w

ait

closed

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling: Slow Start (2)

[Figure: client/server timing diagram: RTT to initiate the TCP connection, RTT to request the object, then the server sends windows taking S/R, 2S/R, 4S/R, 8S/R, ... until transmission completes and the object is delivered.]

Example: O/S = 15 segments, K = 4 windows, Q = 2, so P = min{K-1, Q} = 2: the server idles P = 2 times.

Delay components: 2 RTT for connection establishment and the request; O/R to transmit the object; plus the time the server idles due to slow start (the server idles P = min{K-1, Q} times).

3-106

TCP Delay Modeling (3)

Components, for k = 1, 2, ..., K:

2^(k-1) S/R = time to transmit the kth window
S/R + RTT = time from when the server starts to send a segment until it receives its acknowledgement
idle time after the kth window = max[ S/R + RTT - 2^(k-1) S/R , 0 ]

delay = 2RTT + O/R + sum over k = 1..P of idleTime_k
      = 2RTT + O/R + sum over k = 1..P of [ S/R + RTT - 2^(k-1) S/R ]
      = 2RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

[Figure: the same client/server timing diagram as on the previous slide, showing connection setup, the request, and windows of S/R, 2S/R, 4S/R, 8S/R until the object is delivered.]

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

K = min{ k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O }
  = min{ k : (2^k - 1) S >= O }
  = min{ k : 2^k >= O/S + 1 }
  = min{ k : k >= log2(O/S + 1) }
  = ceil( log2(O/S + 1) )

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
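Putting the last few slides together, here is a Python sketch (names are mine) that computes K and the slow-start latency directly from the idle-time sum, which is equivalent to the closed form with P = min{Q, K-1}; with the first row of Experiment C below it should give roughly the 5.8 sec shown in that table:

```python
import math

def slow_start_latency(O, R, S, RTT):
    """Latency of one object under slow start, no loss.
    O: object bits, R: bps, S: MSS bits, RTT: seconds."""
    K = math.ceil(math.log2(O / S + 1))           # windows that cover the object
    idle = sum(max(S / R + RTT - 2 ** (k - 1) * S / R, 0) for k in range(1, K))
    return 2 * RTT + O / R + idle

# Experiment C, first row: S = 536 bytes, RTT = 1 s, O = 5 kbytes, R = 28 kbps
print(round(slow_start_latency(O=5_000 * 8, R=28_000, S=536 * 8, RTT=1.0), 1))
```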

3-108

Experiment A: S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

R          O/R         P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    28.6 sec    1   28.8 sec                    28.9 sec
100 kbps    8.0 sec    2    8.2 sec                     8.4 sec
1 Mbps     800 msec    5    1.0 sec                     1.5 sec
10 Mbps     80 msec    7    0.28 sec                    0.98 sec

Note: slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B: S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

R          O/R         P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    1.43 sec    1   1.63 sec                    1.73 sec
100 kbps   0.4 sec     2   0.6 sec                     0.76 sec
1 Mbps     40 msec     3   0.24 sec                    0.52 sec
10 Mbps     4 msec     3   0.20 sec                    0.50 sec

Note: slow start adds appreciable delay when R is high, even for a relatively small object.

3-110

Experiment C: S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

R          O/R         P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    1.43 sec    3   3.4 sec                     5.8 sec
100 kbps   0.4 sec     3   2.4 sec                     5.2 sec
1 Mbps     40 msec     3   2.0 sec                     5.0 sec
10 Mbps     4 msec     3   2.0 sec                     5.0 sec

Note: slow start can significantly increase latency when the object is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of 1 base HTML page (of size O bits) and M images (each also of size O bits).

Non-persistent HTTP: M+1 TCP connections in series.
Response time = (M+1) O/R + (M+1) 2RTT + sum of idle times

Persistent HTTP: 2 RTT to request and receive the base HTML file, 1 RTT to request and receive all M images.
Response time = (M+1) O/R + 3RTT + sum of idle times

Non-persistent HTTP with X parallel connections (suppose M/X is an integer): 1 TCP connection for the base file, then M/X sets of X parallel connections for the images.
Response time = (M+1) O/R + (M/X + 1) 2RTT + sum of idle times
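The three formulas, ignoring the "sum of idle times" terms (so these are lower bounds), as a Python sketch with the parameter values used in the charts below (O = 5 Kbytes is taken as 40,000 bits):

```python
def http_response_times(O, R, RTT, M, X):
    """Lower-bound response times from the HTTP modeling formulas
    (slow-start idle times omitted). O in bits, R in bps, RTT in seconds."""
    transmit = (M + 1) * O / R
    non_persistent = transmit + (M + 1) * 2 * RTT
    persistent = transmit + 3 * RTT
    parallel = transmit + (M / X + 1) * 2 * RTT
    return non_persistent, persistent, parallel

# RTT = 100 msec, O = 5 Kbytes, M = 10 images, X = 5 parallel connections, R = 100 kbps
print(http_response_times(O=40_000, R=100_000, RTT=0.1, M=10, X=5))
```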

3-112

[Bar chart: HTTP response time (in seconds, 0 to 20) vs. link rate (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.]

For a low-bandwidth connection, connection and response time are dominated by transmission time. Persistent connections give only a minor improvement over parallel connections.

3-113

[Bar chart: HTTP response time (in seconds, 0 to 70) vs. link rate (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.]

For larger RTT, response time is dominated by TCP connection establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services: multiplexing and demultiplexing, reliable data transfer, flow control, congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application and transport layers) and heading into the network "core".

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 76: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-76

TCP Connection Management (cont)

TCP clientlifecycle

TCP serverlifecycle

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 77: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-77

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 78: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-78

Principles of Congestion Control

Congestion informally ldquotoo many sources sending

too much data too fast for network to handlerdquo

different from flow control manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

another top-10 problem

TCP provides one of the MANY WAYS to perform CC

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling

Assume the Web page consists of:
• 1 base HTML page (of size O bits)
• M images (each of size O bits)

Non-persistent HTTP:
• M+1 TCP connections in series
• Response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times

Persistent HTTP:
• 2 RTT to request and receive the base HTML file
• 1 RTT to request and receive the M images
• Response time = (M+1)·O/R + 3·RTT + sum of idle times

Non-persistent HTTP with X parallel connections (suppose M/X is an integer):
• 1 TCP connection for the base file
• M/X sets of X parallel connections for the images
• Response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times

(A code sketch comparing the three cases follows.)
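As a rough comparison of the three cases, here is a small Python sketch that evaluates the three response-time expressions above with the idle-time terms dropped, so it tracks the lower-bound trend rather than the full bars in the charts below; the function names and the decision to ignore idle times are my own simplifications.

def non_persistent(O, R, RTT, M):
    # (M+1) connections in series, each paying 2 RTT for setup and request
    return (M + 1) * O / R + (M + 1) * 2 * RTT

def persistent(O, R, RTT, M):
    # one connection: 2 RTT for the base file, 1 more RTT for the batch of M images
    return (M + 1) * O / R + 3 * RTT

def parallel_non_persistent(O, R, RTT, M, X):
    # base file plus M/X rounds of X parallel connections; assumes M % X == 0
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT

# Same parameters as the first chart: RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5
O, RTT, M, X = 5e3 * 8, 0.1, 10, 5
for R in (28e3, 100e3, 1e6, 10e6):
    print(f"R = {R / 1e3:6.0f} kbps: non-persistent {non_persistent(O, R, RTT, M):5.2f} s, "
          f"persistent {persistent(O, R, RTT, M):5.2f} s, "
          f"parallel {parallel_non_persistent(O, R, RTT, M, X):5.2f} s")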

3-112

[Bar chart: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent HTTP at link rates of 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps; vertical axis 0–20 seconds. Parameters: RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.]

For low-bandwidth links, connection and response time are dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

[Bar chart: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent HTTP at link rates of 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps; vertical axis 0–70 seconds. Parameters: RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.]

For larger RTT, response time is dominated by TCP connection-establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay×bandwidth networks.

3-114

Chapter 3 Summary

Principles behind transport-layer services:
• multiplexing, demultiplexing
• reliable data transfer
• flow control
• congestion control

Instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application and transport layers) and moving into the network "core".

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 79: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-79

Causescosts of congestion scenario 1

two senders two receivers (homogeneous)

one router infinite buffers capacity C

no retransmission

large delays when congested

maximum achievable throughput

unlimited shared output link buffers

Host Ain original data

Host B

out

From application To application

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 80: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-80

Causescosts of congestion scenario 2

one router finite buffers sender retransmission of lost

packet

finite shared output link buffers

Host A in original data

Host B

out

in original data plus retransmitted data

Due to transport layer

Pkt got dropped

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 81: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-81

Causescosts of congestion scenario 2 always (goodput)

ldquoperfectrdquo retransmission only when loss

retransmission of delayed (not lost) packet makes

larger (than perfect case) for same

in

out

=

in

out

gt

in

out

ldquocostsrdquo of congestion more work (retrans) for given ldquogoodputrdquo unneeded retransmissions link carries multiple copies of

pkt

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 82: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-82

Causescosts of congestion scenario 3 four senders multihop paths timeoutretransmit

in

Q what happens as and increase

in

finite shared output link buffers

Host Ain original data

Host B

out

in original data plus retransmitted data

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study: ATM ABR congestion control
ABR: available bit rate
- "elastic service"
- if sender's path is "underloaded": sender should use the available bandwidth
- if sender's path is congested: sender is throttled to the minimum guaranteed rate
RM (resource management) cells:
- sent by sender, interspersed with data cells
- bits in the RM cell are set by switches ("network-assisted"):
  NI bit: no increase in rate (mild congestion)
  CI bit: congestion indication
- RM cells are returned to the sender by the receiver, with the bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline
3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
 - segment structure
 - reliable data transfer
 - flow control
 - connection management
3.6 Principles of congestion control
3.7 TCP congestion control

3-88

TCP Congestion Control
- end-end control (no network assistance)
- sender limits transmission: LastByteSent - LastByteAcked <= CongWin
- roughly, rate = CongWin / RTT  bytes/sec
- CongWin is a dynamic function of perceived network congestion
How does the sender perceive congestion?
- loss event = timeout or 3 duplicate ACKs
- TCP sender reduces rate (CongWin) after a loss event
Three mechanisms: AIMD, slow start, conservative behaviour after timeout events.
Note: congestion control must be efficient to make use of the available bandwidth.
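A minimal sketch (not from the slides) of how a sender could enforce the CongWin limit above and estimate the resulting rate; the names cong_win, last_byte_sent, last_byte_acked are illustrative, and a real TCP also caps the window by the receiver's advertised window.

# Sketch: limit unacknowledged data to CongWin and estimate the sending rate.
def usable_window(last_byte_sent: int, last_byte_acked: int, cong_win: int) -> int:
    """Bytes the sender may still push without exceeding CongWin."""
    in_flight = last_byte_sent - last_byte_acked
    return max(0, cong_win - in_flight)

def approx_rate_bps(cong_win_bytes: int, rtt_sec: float) -> float:
    """rate ~ CongWin / RTT, expressed in bits per second."""
    return cong_win_bytes * 8 / rtt_sec

# Example (assumed values): CongWin = 8000 bytes, RTT = 100 ms -> 640000.0 bps
print(approx_rate_bps(8000, 0.1))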

3-89

TCP AIMD
- multiplicative decrease: cut CongWin in half after a loss event
- additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing
[Figure: congestion window of a long-lived TCP connection vs. time, a sawtooth oscillating around roughly 8, 16 and 24 Kbytes.]
Role of ACKs:
1. an indication of loss
2. ACKs are self-clocking: if the delay is large, ACKs return slowly and the congestion window grows at a correspondingly slower rate
Human analogy: HK government or CUHK.
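A short sketch of the AIMD rule described above, assuming the window is kept in bytes and adjusted in MSS-sized steps; purely illustrative.

# AIMD sketch: additive increase each RTT, multiplicative decrease on loss.
MSS = 1460  # bytes (illustrative)

def on_rtt_without_loss(cong_win: int) -> int:
    """Additive increase: grow by one MSS per RTT."""
    return cong_win + MSS

def on_loss_event(cong_win: int) -> int:
    """Multiplicative decrease: halve the window, but keep at least 1 MSS."""
    return max(MSS, cong_win // 2)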

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)
When a connection begins, increase the rate exponentially until the first loss event:
- double CongWin every RTT
- done by incrementing CongWin by 1 MSS for every ACK received
Summary: the initial rate is slow but ramps up exponentially fast.
[Figure: time diagram between Host A and Host B: one segment in the first RTT, two segments in the second, four segments in the third.]
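To see why the per-ACK increment doubles the window each RTT, here is a small bookkeeping sketch (illustrative names, window in units of MSS):

# Slow-start sketch: CongWin += MSS for every ACK.
# If a whole window of W segments is ACKed within one RTT,
# the window grows by W*MSS, i.e. it doubles each RTT.
MSS = 1460  # bytes (illustrative)

def on_ack_slow_start(cong_win: int) -> int:
    return cong_win + MSS

cong_win = MSS
for rtt in range(4):
    acks_this_rtt = cong_win // MSS          # one ACK per segment in flight
    for _ in range(acks_this_rtt):
        cong_win = on_ack_slow_start(cong_win)
    print(rtt + 1, cong_win // MSS)          # window after each RTT: 2, 4, 8, 16 segments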

3-92

Refinement
Philosophy:
- 3 dup ACKs indicate the network is still capable of delivering some segments
- a timeout before 3 dup ACKs is "more alarming"
After 3 dup ACKs:
- CongWin is cut in half, and the window then grows linearly
- this is known as the "fast-recovery" phase, implemented in the newer TCP-Reno
But after a timeout event:
- CongWin is instead set to 1 MSS
- the window then grows exponentially up to a threshold, then grows linearly

3-93

Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.
Implementation:
- variable Threshold
- at a loss event, Threshold is set to 1/2 of CongWin just before the loss event
[Figure: congestion window size (segments) vs. transmission round, showing the threshold and the different evolution of TCP Tahoe and TCP Reno after a loss.]

3-94

Summary: TCP Congestion Control
- When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.
- When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.
- When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.
- When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.
READ THE BOOK on TCP-Vegas: instead of reacting to a loss, anticipate and prepare for a loss.

3-95

TCP sender congestion control

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
Action: CongWin = CongWin + MSS * (MSS/CongWin)
Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
State: SS or CA
Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
State: SS or CA
Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: enter slow start

Event: duplicate ACK
State: SS or CA
Action: increment the duplicate ACK count for the segment being ACKed
Commentary: CongWin and Threshold not changed
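The table above maps naturally onto an event handler. The sketch below follows the same actions; it is illustrative only (window and threshold in bytes, a guessed initial threshold, no sockets or timers), not a working TCP stack.

# Sketch of the sender-side congestion-control events from the table above.
MSS = 1460  # bytes (illustrative)

class CongestionState:
    def __init__(self):
        self.cong_win = MSS
        self.threshold = 64 * 1024   # illustrative initial threshold
        self.phase = "SS"            # "SS" = slow start, "CA" = congestion avoidance

    def on_new_ack(self):
        if self.phase == "SS":
            self.cong_win += MSS                       # exponential growth
            if self.cong_win > self.threshold:
                self.phase = "CA"
        else:
            self.cong_win += MSS * MSS // self.cong_win  # ~ +1 MSS per RTT

    def on_triple_dup_ack(self):                       # fast recovery (Reno)
        self.threshold = max(MSS, self.cong_win // 2)
        self.cong_win = self.threshold
        self.phase = "CA"

    def on_timeout(self):                              # back to slow start
        self.threshold = max(MSS, self.cong_win // 2)
        self.cong_win = MSS
        self.phase = "SS"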

3-96

TCP throughput
- What is the average throughput of TCP as a function of window size and RTT? (Ignore slow start.)
- Let W be the window size when loss occurs.
- When the window is W, throughput is W/RTT.
- Just after a loss, the window drops to W/2 and throughput to W/(2·RTT).
- The window then grows linearly back from W/2 to W, so the average throughput is about 0.75·W/RTT.
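A quick numeric check of the 0.75·W/RTT claim, averaging the window over one sawtooth cycle (illustrative only):

# The window grows linearly from W/2 back to W over a cycle,
# so the mean window is 0.75 * W.
W = 100.0  # segments at the moment of loss (illustrative)
samples = [W / 2 + (W / 2) * t / 1000 for t in range(1001)]
print(sum(samples) / len(samples) / W)   # prints ~0.75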

3-97

TCP Futures
- Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput.
- Requires a window size of W = 83,333 in-flight segments.
- Throughput in terms of loss rate:  throughput ~ 1.22 · MSS / (RTT · sqrt(L))
- For 10 Gbps this requires L = 2·10^-10. Wow! New versions of TCP are needed for high-speed links.
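Plugging the example's numbers into those two relations; a sketch using the standard steady-state throughput approximation quoted above.

import math

MSS = 1500 * 8          # bits
RTT = 0.1               # seconds
target = 10e9           # 10 Gbps

W = target * RTT / MSS                   # in-flight segments needed
L = (1.22 * MSS / (RTT * target)) ** 2   # loss rate that sustains the target
print(round(W), L)                       # ~83333 segments, ~2e-10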

3-98

TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.
[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R.]

3-99

Why is TCP fair?
Two competing homogeneous sessions (e.g., similar propagation delay, etc.):
- additive increase gives a slope of 1 as throughput increases
- multiplicative decrease decreases throughput proportionally
The two connections therefore cycle through: congestion avoidance (additive increase), loss (decrease window by a factor of 2), additive increase again, and so on.
[Figure: Connection 2 throughput vs. Connection 1 throughput, each bounded by R; the trajectory oscillates toward the equal-bandwidth-share line.]
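A tiny simulation of that argument, under the simplifying assumption that both flows lose a packet exactly when their combined rate exceeds the link rate R (illustrative, not a packet-level model):

# Two AIMD flows sharing a link of rate R: both add 1 unit per round,
# both halve when the sum of rates exceeds R; their rates converge.
R = 100.0
x, y = 80.0, 10.0          # deliberately unequal starting rates
for _ in range(200):
    x += 1.0
    y += 1.0
    if x + y > R:          # loss for both flows at the bottleneck
        x /= 2.0
        y /= 2.0
print(round(x, 1), round(y, 1), round(abs(x - y), 2))  # rates end up nearly equal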

3-100

Fairness (more)
Fairness and UDP:
- multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
- instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
- research area: TCP-friendly congestion control
Fairness and parallel TCP connections:
- nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this
- example: a link of rate R already supporting 9 connections
  - a new app asking for 1 TCP gets rate R/10
  - a new app asking for 11 TCPs gets 11R/20, i.e. around R/2
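Assuming the link is split equally per connection (the assumption behind the example above), a new application opening n of the 9+n connections gets n·R/(9+n); a one-line check:

# Per-connection fair split: a new app opening n of the (9 + n) connections
# on a link of rate R gets the fraction n / (9 + n).
R = 1.0
for n in (1, 11):
    print(n, round(n * R / (9 + n), 2))   # 1 -> 0.1 (R/10), 11 -> 0.55 (~R/2)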

3-101

Delay modeling
Q: How long does it take to receive an object from a Web server after sending a request?
Delay is influenced by:
- TCP connection establishment
- data transmission delay
- slow start (the sender's congestion window)
Notation, assumptions:
- assume one link between client and server, of rate R
- S: MSS (bits); O: object size (bits)
- no retransmissions (no loss, no corruption)
- protocol overhead is negligible
Window size:
- first assume a fixed congestion window of W segments
- then dynamic window modeling: slow start

3-102

Fixed congestion window (1): assume no congestion-window constraint
Let W denote a 'fixed' congestion window size (a positive integer).
First case: WS/R > RTT + S/R, i.e. the ACK for the first segment in the window returns before a window's worth of data has been sent.
delay = 2·RTT + O/R
(The object request is piggybacked on the connection setup; this is the "lower bound" of latency.)

3-103

Fixed (or static) congestion window (2)
Second case: WS/R < RTT + S/R, i.e. the sender must wait for an ACK after sending each window's worth of data.
delay = 2·RTT + O/R + (K - 1)·[S/R + RTT - WS/R]
where K is the number of windows of data that cover the object; if O is the size of the object, K = O/(WS), rounded up to an integer.
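A small sketch computing the static-window delay for both cases above (illustrative parameter values; sizes in bits, times in seconds):

import math

def static_window_delay(O: float, S: float, R: float, rtt: float, W: int) -> float:
    """Latency for a fixed congestion window of W segments (no slow start)."""
    if W * S / R >= rtt + S / R:           # case 1: window large enough, no stalls
        return 2 * rtt + O / R
    K = math.ceil(O / (W * S))             # windows needed to cover the object
    return 2 * rtt + O / R + (K - 1) * (S / R + rtt - W * S / R)

# Example (assumed values): O = 100 kbits, S = 4 kbits, R = 1 Mbps, RTT = 100 ms, W = 4
print(static_window_delay(100e3, 4e3, 1e6, 0.1, 4))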

3-104

TCP Delay Modeling: Slow Start (1)
Now suppose the window grows according to slow start.
We will show that the delay for one object is:
Latency = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R
where P is the number of times TCP idles at the server:
P = min{Q, K - 1}
- where Q is the number of times the server would idle if the object were of infinite size
- and K is the number of windows that cover the object

3-105

TCP Delay Modeling: Slow Start (2)
[Figure: timeline between client and server: initiate TCP connection, request object, first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R, complete transmission, object delivered.]
Example:
- O/S = 15 segments
- K = 4 windows
- Q = 2
- P = min{K-1, Q} = 2
so the server idles P = 2 times.
Delay components:
- 2 RTT for connection establishment and request
- O/R to transmit the object
- the time the server idles due to slow start (the server idles P = min{K-1, Q} times)

3-106

TCP Delay Modeling (3)
For k = 1, 2, ..., K:
- 2^(k-1)·S/R = time to transmit the k-th window
- S/R + RTT = time from when the server starts to send a segment until the server receives its acknowledgement
- idle time after the k-th window = [S/R + RTT - 2^(k-1)·S/R]^+

delay = 2·RTT + O/R + sum_{k=1..P} idleTime_k
      = 2·RTT + O/R + sum_{k=1..P} [S/R + RTT - 2^(k-1)·S/R]
      = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·S/R

[Figure: the same client/server slow-start timeline as on the previous slide.]

3-107

TCP Delay Modeling (4)
Recall that K is the number of windows that cover the object. How do we calculate K?

K = min{k : 2^0·S + 2^1·S + ... + 2^(k-1)·S >= O}
  = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
  = min{k : 2^k - 1 >= O/S}
  = min{k : k >= log2(O/S + 1)}
  = ceil( log2(O/S + 1) )

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
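A sketch that turns the last few slides into code: it computes K, Q, P and the slow-start latency. The expression used for Q is an assumption derived from the idle-time condition (the slides leave Q as homework). With S = 536 bytes, RTT = 100 msec and O = 5 kbytes it reproduces the Experiment B row for R = 1 Mbps below.

import math

def slow_start_latency(O: float, S: float, R: float, rtt: float):
    """Latency of one object over one link with TCP slow start and no loss."""
    K = math.ceil(math.log2(O / S + 1))             # windows covering the object
    # Q: largest k with positive idle time for an infinite object,
    # i.e. S/R + RTT - 2**(k-1) * S/R > 0  (assumed form of the HW result)
    Q = math.ceil(math.log2(1 + rtt / (S / R)))
    P = min(Q, K - 1)
    latency = 2 * rtt + O / R + P * (rtt + S / R) - (2 ** P - 1) * S / R
    return K, Q, P, latency

# Experiment B settings: S = 536 bytes, RTT = 100 msec, O = 5 kbytes, R = 1 Mbps
print(slow_start_latency(5 * 8e3, 536 * 8, 1e6, 0.1))   # P = 3, latency ~ 0.52 s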

3-108

Experiment A: S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

R        | O/R      | P | Min latency: O/R + 2·RTT | Latency with slow start
28 kbps  | 28.6 sec | 1 | 28.8 sec                 | 28.9 sec
100 kbps | 8.0 sec  | 2 | 8.2 sec                  | 8.4 sec
1 Mbps   | 800 msec | 5 | 1.0 sec                  | 1.5 sec
10 Mbps  | 80 msec  | 7 | 0.28 sec                 | 0.98 sec

Observation: slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B: S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

R        | O/R      | P | Min latency: O/R + 2·RTT | Latency with slow start
28 kbps  | 1.43 sec | 1 | 1.63 sec                 | 1.73 sec
100 kbps | 0.4 sec  | 2 | 0.6 sec                  | 0.76 sec
1 Mbps   | 40 msec  | 3 | 0.24 sec                 | 0.52 sec
10 Mbps  | 4 msec   | 3 | 0.20 sec                 | 0.50 sec

Observation: slow start adds appreciable delay when R is high, even for a relatively small object.

3-110

Experiment C: S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

R        | O/R      | P | Min latency: O/R + 2·RTT | Latency with slow start
28 kbps  | 1.43 sec | 3 | 3.4 sec                  | 5.8 sec
100 kbps | 0.4 sec  | 3 | 2.4 sec                  | 5.2 sec
1 Mbps   | 40 msec  | 3 | 2.0 sec                  | 5.0 sec
10 Mbps  | 4 msec   | 3 | 2.0 sec                  | 5.0 sec

Observation: slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling
Assume a Web page consists of:
- 1 base HTML page (of size O bits)
- M images (each of size O bits)
Non-persistent HTTP:
- M+1 TCP connections in series
- Response time = (M+1)·O/R + (M+1)·2·RTT + sum of idle times
Persistent HTTP:
- 2 RTT to request and receive the base HTML file
- 1 RTT to request and receive the M images
- Response time = (M+1)·O/R + 3·RTT + sum of idle times
Non-persistent HTTP with X parallel connections:
- suppose M/X is an integer
- 1 TCP connection for the base file, then M/X sets of parallel connections for the images
- Response time = (M+1)·O/R + (M/X + 1)·2·RTT + sum of idle times
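A sketch of the three response-time formulas with the slow-start "sum of idle times" term left out, so these are lower bounds rather than the full values plotted on the next slides; the parameter values follow the charts' settings and the link rate is an assumed example.

def http_response_times(O: float, R: float, rtt: float, M: int, X: int):
    """Response-time formulas above, ignoring the 'sum of idle times' term."""
    non_persistent = (M + 1) * O / R + (M + 1) * 2 * rtt
    persistent = (M + 1) * O / R + 3 * rtt
    parallel = (M + 1) * O / R + (M / X + 1) * 2 * rtt
    return non_persistent, persistent, parallel

# Chart settings: O = 5 Kbytes, M = 10, X = 5, RTT = 100 msec; assumed R = 1 Mbps
print(http_response_times(5 * 8e3, 1e6, 0.1, 10, 5))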

3-112

[Figure: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.]
- For low bandwidth, connection and response time are dominated by transmission time.
- Persistent connections give only a minor improvement over parallel connections.

3-113

[Figure: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent connections at 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.]
- For larger RTT, response time is dominated by TCP establishment and slow-start delays.
- Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary
- principles behind transport-layer services: multiplexing and demultiplexing, reliable data transfer, flow control, congestion control
- instantiation and implementation in the Internet: UDP, TCP
Next:
- leaving the network "edge" (application and transport layers)
- into the network "core"

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 83: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-83

Causescosts of congestion scenario 3

Another ldquocostrdquo of congestion when packet dropped any ldquoupstream

transmission capacity used for that packet was wasted

Host A

Host B

o

u

t

System collapses (eg students)

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 84: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-84

Approaches towards congestion control

End-end congestion control

no explicit feedback from network

congestion inferred from end-system observed loss delay

approach taken by TCP

Network-assisted congestion control

routers provide feedback to end systems single bit indicating

congestion (SNA DECbit TCPIP ECN ATM)

explicit rate sender should send at

Two broad approaches towards congestion control

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 85: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-85

Case study ATM ABR congestion control

ABR available bit rate

ldquoelastic servicerdquo if senderrsquos path

ldquounderloadedrdquo sender should use

available bandwidth if senderrsquos path

congested sender throttled to

minimum guaranteed rate

RM (resource management) cells

sent by sender interspersed with data cells

bits in RM cell set by switches (ldquonetwork-assistedrdquo) NI bit no increase in rate

(mild congestion) CI bit congestion indication

RM cells returned to sender by receiver with bits intact

3-86

Case study ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell senderrsquo send rate thus minimum supportable rate on path

EFCI (explicit forward congestion indication) bit in cells set to 1 in congested switch if data cell preceding RM cell has EFCI set receiver sets CI

bit in returned RM cell

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

For k = 1, 2, ..., K:

time to transmit the kth window: 2^(k-1) S/R

time from when the server starts to send a segment until it receives the acknowledgement: S/R + RTT

idle time after the kth window: [S/R + RTT - 2^(k-1) S/R] (taken as zero if negative)

delay = 2RTT + O/R + sum over k = 1..P of idleTime_k
      = 2RTT + O/R + sum over k = 1..P of [S/R + RTT - 2^(k-1) S/R]
      = 2RTT + O/R + P[RTT + S/R] - (2^P - 1)S/R

[Figure: the same slow-start timing diagram as on the previous slide, with windows of S/R, 2S/R, 4S/R, 8S/R and idle gaps at the server.]

3-107

TCP Delay Modeling (4)

Recall that K is the number of windows that cover the object. How do we calculate K?

K = min{k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O}
  = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
  = min{k : 2^k - 1 >= O/S}
  = min{k : k >= log2(O/S + 1)}
  = ceil( log2(O/S + 1) )

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
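Putting the pieces together, here is a sketch (Python, illustrative only) that computes K, Q, P, and the slow-start latency formula from slide 3-104. The closed form used for Q is my own assumption, derived from the idle-time condition on the previous slide; the slide itself leaves the calculation of Q as homework.

    import math

    # Latency of one object under slow start, no loss:
    #   Latency = 2*RTT + O/R + P*(RTT + S/R) - (2**P - 1)*S/R
    def slow_start_latency(O, S, R, RTT):
        K = math.ceil(math.log2(O / S + 1))          # windows that cover the object
        Q = math.ceil(math.log2(1 + RTT * R / S))    # idles for an infinite object (assumed form)
        P = min(K - 1, Q)                            # times the server actually idles
        return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * (S / R)

    # S = 536 bytes, RTT = 100 msec, O = 5 kbytes, R = 1 Mbps
    print(slow_start_latency(5 * 8000, 536 * 8, 1e6, 0.1))   # roughly 0.52 sec

For these parameters the result is about 0.52 sec, in line with the 1 Mbps row of Experiment B below.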

3-108

Experiment A

R        | O/R      | P | Min latency (O/R + 2 RTT) | Latency with slow start
28 kbps  | 28.6 sec | 1 | 28.8 sec                  | 28.9 sec
100 kbps | 8 sec    | 2 | 8.2 sec                   | 8.4 sec
1 Mbps   | 800 msec | 5 | 1.0 sec                   | 1.5 sec
10 Mbps  | 80 msec  | 7 | 0.28 sec                  | 0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R        | O/R      | P | Min latency (O/R + 2 RTT) | Latency with slow start
28 kbps  | 1.43 sec | 1 | 1.63 sec                  | 1.73 sec
100 kbps | 0.4 sec  | 2 | 0.6 sec                   | 0.76 sec
1 Mbps   | 40 msec  | 3 | 0.24 sec                  | 0.52 sec
10 Mbps  | 4 msec   | 3 | 0.20 sec                  | 0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

Slow start adds appreciable delay when R is high, even for a relatively small object.

3-110

Experiment C

R        | O/R      | P | Min latency (O/R + 2 RTT) | Latency with slow start
28 kbps  | 1.43 sec | 3 | 3.4 sec                   | 5.8 sec
100 kbps | 0.4 sec  | 3 | 2.4 sec                   | 5.2 sec
1 Mbps   | 40 msec  | 3 | 2.0 sec                   | 5.0 sec
10 Mbps  | 4 msec   | 3 | 2.0 sec                   | 5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

Slow start can significantly increase latency when the object is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of 1 base HTML page (of size O bits) and M images (each of size O bits).

Non-persistent HTTP: M+1 TCP connections in series.
Response time = (M+1)O/R + 2(M+1)RTT + sum of idle times

Persistent HTTP: 2 RTT to request and receive the base HTML file, 1 RTT to request and receive the M images.
Response time = (M+1)O/R + 3RTT + sum of idle times

Non-persistent HTTP with X parallel connections (suppose M/X is an integer): 1 TCP connection for the base file, then M/X sets of X parallel connections for the images.
Response time = (M+1)O/R + 2(M/X + 1)RTT + sum of idle times
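Here is a sketch of the three response-time formulas (Python, illustrative only). The sum-of-idle-times terms are deliberately omitted, so these are the formula skeletons rather than the full curves plotted on the next two slides.

    # Response-time skeletons for a page of 1 base file + M images, each of O bits,
    # over a link of rate R; per-connection slow-start idle times are omitted.
    def http_response_times(O, R, RTT, M, X):
        transmit = (M + 1) * O / R
        non_persistent = transmit + (M + 1) * 2 * RTT
        persistent = transmit + 3 * RTT
        parallel_non_persistent = transmit + (M / X + 1) * 2 * RTT
        return non_persistent, persistent, parallel_non_persistent

    # parameters used in the charts that follow: O = 5 Kbytes, M = 10, X = 5, RTT = 100 msec
    for R in (28e3, 100e3, 1e6, 10e6):
        print(R, http_response_times(5 * 8000, R, 0.1, 10, 5))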

3-112

[Chart: HTTP response time in seconds vs. link rate (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.]

For a low-bandwidth connection, connection and response time are dominated by transmission time. Persistent connections give only a minor improvement over parallel connections.

3-113

[Chart: HTTP response time in seconds vs. link rate (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.]

For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.

3-114

Chapter 3 Summary

Principles behind transport-layer services: multiplexing and demultiplexing, reliable data transfer, flow control, congestion control.

Instantiation and implementation in the Internet: UDP and TCP.

Next: leaving the network "edge" (application and transport layers) and heading into the network "core".

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 87: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-87

Chapter 3 outline

31 Transport-layer services

32 Multiplexing and demultiplexing

33 Connectionless transport UDP

34 Principles of reliable data transfer

35 Connection-oriented transport TCP segment structure reliable data transfer flow control connection

management

36 Principles of congestion control

37 TCP congestion control

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 88: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-88

TCP Congestion Control

end-end control (no network assistance)

sender limits transmission LastByteSent-LastByteAcked

CongWin Roughly

CongWin is dynamic function of perceived network congestion

How does sender perceive congestion

loss event = timeout or 3 duplicate acks

TCP sender reduces rate (CongWin) after loss event

three mechanisms AIMD slow start conservative after

timeout events

rate = CongWin

RTT Bytessec

Note CC must be efficient to make use of available BW

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 89: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-89

TCP AIMD

8 Kbytes

16 Kbytes

24 Kbytes

time

congestionwindow

multiplicative decrease cut CongWin in half after loss event

additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing

Long-lived TCP connection

Role of ACK1 An indication of

loss2 Ack is self

clocking if delay is large the rate of congestion window will be reduced

Human analogy HK government or CUHK

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

[Figure: client-server timing diagram for slow start: initiate TCP connection (RTT), request object, server sends first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R; object delivered; time at client / time at server]

Example:
O/S = 15 segments, K = 4 windows, Q = 2, P = min{K-1, Q} = 2.
The server idles P = 2 times.

Delay components:
2 RTT for connection establishment and request; O/R to transmit the object; time the server idles due to slow start.

In general, the server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

For the kth window (k = 1, 2, ..., K):

time to transmit the kth window = 2^(k-1) S/R

time from when the server starts to send a segment until the server receives its acknowledgement = S/R + RTT

idle time after the kth window = [S/R + RTT - 2^(k-1) S/R]+   (zero if negative)

delay = 2 RTT + O/R + sum over p = 1..P of idleTime_p
      = 2 RTT + O/R + sum over k = 1..P of [S/R + RTT - 2^(k-1) S/R]
      = 2 RTT + O/R + P [S/R + RTT] - (2^P - 1) S/R

[Figure: same client-server slow-start timing diagram as on the previous slide]

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

K = min{k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O}
  = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
  = min{k : 2^k - 1 >= O/S}
  = min{k : k >= log2(O/S + 1)}
  = ceil(log2(O/S + 1))

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
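
Putting the last few slides together, the sketch below computes K, Q, P, and the slow-start latency formula (assumptions: quantities in bits and seconds; Q is taken as the largest k for which the idle term stays positive; names are illustrative):

import math

def slow_start_latency(o_bits, s_bits, r_bps, rtt_s):
    """Latency = 2*RTT + O/R + P*(RTT + S/R) - (2**P - 1)*S/R, with P = min(Q, K-1)."""
    k = math.ceil(math.log2(o_bits / s_bits + 1))   # windows covering the object
    # Q: number of idle periods for an infinite-size object;
    # idle time after window j is positive while S/R + RTT > 2**(j-1) * S/R
    q = 0
    while s_bits / r_bps + rtt_s > (2 ** q) * s_bits / r_bps:
        q += 1
    p = min(q, k - 1)
    return (2 * rtt_s + o_bits / r_bps
            + p * (rtt_s + s_bits / r_bps)
            - (2 ** p - 1) * s_bits / r_bps)

# example parameters: S = 536 bytes, RTT = 100 ms, O = 100 kbytes, R = 1 Mbps
print(slow_start_latency(100e3 * 8, 536 * 8, 1e6, 0.1))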

3-108

Experiment A

R         O/R       P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps   28.6 sec  1   28.8 sec                    28.9 sec
100 kbps  8 sec     2   8.2 sec                     8.4 sec
1 Mbps    800 msec  5   1 sec                       1.5 sec
10 Mbps   80 msec   7   0.28 sec                    0.98 sec

S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

Observation: slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B

R         O/R       P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps   1.43 sec  1   1.63 sec                    1.73 sec
100 kbps  0.4 sec   2   0.6 sec                     0.76 sec
1 Mbps    40 msec   3   0.24 sec                    0.52 sec
10 Mbps   4 msec    3   0.20 sec                    0.50 sec

S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

Observation: slow start adds appreciable delay when R is high and the object is relatively small.

3-110

Experiment C

R         O/R       P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps   1.43 sec  3   3.4 sec                     5.8 sec
100 kbps  0.4 sec   3   2.4 sec                     5.2 sec
1 Mbps    40 msec   3   2.0 sec                     5.0 sec
10 Mbps   4 msec    3   2.0 sec                     5.0 sec

S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

Observation: slow start can significantly increase latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling

Assume a Web page consists of:
1 base HTML page (of size O bits) and M images (each of size O bits).

Non-persistent HTTP:
M+1 TCP connections in series.
Response time = (M+1) O/R + (M+1) 2 RTT + sum of idle times

Persistent HTTP:
2 RTT to request and receive the base HTML file, 1 RTT to request and receive the M images.
Response time = (M+1) O/R + 3 RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
Suppose M/X is an integer; 1 TCP connection for the base file, then M/X sets of parallel connections for the images.
Response time = (M+1) O/R + (M/X + 1) 2 RTT + sum of idle times
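
Ignoring the idle-time terms, the three response-time formulas can be compared with a short sketch (quantities in bits and seconds; the functions below are illustrative, not from the slides):

def non_persistent(o, m, r, rtt):
    return (m + 1) * o / r + (m + 1) * 2 * rtt

def persistent(o, m, r, rtt):
    return (m + 1) * o / r + 3 * rtt

def non_persistent_parallel(o, m, r, rtt, x):
    return (m + 1) * o / r + (m / x + 1) * 2 * rtt

# e.g. O = 5 Kbytes, M = 10 images, X = 5 parallel connections, RTT = 1 sec, R = 100 kbps
O, M, X, RTT, R = 5 * 8e3, 10, 5, 1.0, 100e3
print(non_persistent(O, M, R, RTT))               # 26.4 s
print(persistent(O, M, R, RTT))                   # 7.4 s
print(non_persistent_parallel(O, M, R, RTT, X))   # 10.4 s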

3-112

[Figure: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent HTTP at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5]

For low bandwidth, connection and response time are dominated by transmission time.

Persistent connections only give minor improvement over parallel connections

3-113

[Figure: HTTP response time (in seconds) for non-persistent, persistent, and parallel non-persistent HTTP at 28 Kbps, 100 Kbps, 1 Mbps, and 10 Mbps; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5]

For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in networks with a high delay-bandwidth product.

3-114

Chapter 3 Summary

Principles behind transport-layer services: multiplexing and demultiplexing, reliable data transfer, flow control, congestion control.

Instantiation and implementation in the Internet: UDP, TCP.

Next: leaving the network "edge" (application, transport layers) and heading into the network "core".

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 90: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-90

TCP Slow Start

When connection begins CongWin = 1 MSS Example MSS = 500

bytes amp RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be gtgt MSSRTT desirable to quickly

ramp up to respectable rate

When connection begins increase rate exponentially fast until first loss event

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 91: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-91

TCP Slow Start (more)

When connection begins increase rate exponentially until first loss event double CongWin every

RTT done by incrementing CongWin for every ACK received

Summary initial rate is slow but ramps up exponentially fast

Host A

one segment

RTT

Host B

time

two segments

four segments

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 92: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-92

Refinement After 3 dup ACKs

CongWin is cut in half window then grows linearly This is known as ldquofast-recoveryrdquo

phase Implemented in new TCP-Reno

But after timeout event CongWin instead set to 1 MSS window then grows exponentially

threshold then grows linearly

bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo

Philosophy

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 93: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-93

Refinement (more)Q When should the

exponential increase switch to linear

A When CongWin gets to 12 of its value before timeout

Implementation Variable Threshold At loss event Threshold

is set to 12 of CongWin just before loss event

0

2

4

6

8

10

12

14

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Transmission round

con

ge

stio

n w

ind

ow

siz

e

(se

gm

en

ts)

threshold

TCP Tahoe

TCP Reno

3-94

Summary TCP Congestion Control When CongWin is below Threshold sender

in slow-start phase window grows exponentially

When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly

When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold

When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS

READ THE BOOK on TCP-Vegas instead of react to a loss anticipate amp prepare for a loss

3-95

TCP sender congestion control

Event State TCP Sender Action Commentary

ACK receipt for previously unacked data

Slow Start (SS)

CongWin = CongWin + MSS If (CongWin gt Threshold) set state to ldquoCongestion Avoidancerdquo

Resulting in a doubling of CongWin every RTT

ACK receipt for previously unacked data

CongestionAvoidance (CA)

CongWin = CongWin+MSS (MSSCongWin)

Additive increase resulting in increase of CongWin by 1 MSS every RTT

Loss event detected by triple duplicate ACK

SS or CA Threshold = CongWin2 CongWin = ThresholdSet state to ldquoCongestion Avoidancerdquo

Fast recovery implementing multiplicative decrease CongWin will not drop below 1 MSS

Timeout SS or CA Threshold = CongWin2 CongWin = 1 MSSSet state to ldquoSlow Startrdquo

Enter slow start

Duplicate ACK

SS or CA Increment duplicate ACK count for segment being acked

CongWin and Threshold not changed

3-96

TCP throughput

Whatrsquos the average throughout ot TCP as a function of window size and RTT Ignore slow start

Let W be the window size when loss occurs

When window is W throughput is WRTT

Just after loss window drops to W2 throughput to W2RTT

Average throughout 75 WRTT

3-97

TCP Futures

Example 1500 byte segments 100ms RTT want 10 Gbps throughput

Requires window size W = 83333 in-flight segments

Throughput in terms of loss rate

L = 210-10 Wow New versions of TCP for high-speed

needed

LRTT

MSS221

3-98

Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK

TCP connection 1

bottleneckrouter

capacity R

TCP connection 2

TCP Fairness

3-99

Why is TCP fair

Two competing homogeneous sessions (eg similar propagation delayetc)

Additive increase gives slope of 1 as throughout increases

multiplicative decrease decreases throughput proportionally

R

R

equal bandwidth share

Connection 1 throughputCon

necti

on

2 t

hro

ug

hp

ut

congestion avoidance additive increase

loss decrease window by factor of 2congestion avoidance additive

increase

loss decrease window by factor of 2

3-100

Fairness (more)

Fairness and UDP Multimedia apps

often do not use TCP do not want rate

throttled by congestion control

Instead use UDP pump audiovideo

at constant rate tolerate packet loss

Research area TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel cnctions between 2 hosts

Web browsers do this Example link of rate R

supporting 9 connections new app asks for 1 TCP

gets rate R10 new app asks for 11 TCPs

gets around R2

3-101

Delay modeling

Q How long does it take to receive an object from a Web server after sending a request

Delay is influenced by TCP connection

establishment data transmission delay slow start Senderrsquos congestion

window

Notation assumptions Assume one link between client and

server of rate R S MSS (bits) O object size (bits) no retransmissions (no loss no

corruption) Protocolrsquos overhead is negligible

Window size First assume fixed congestion

window W segments Then dynamic window modeling

slow start

3-102

Fixed congestion window (1) assume no congestion window constratint

First caseWSR gt RTT + SR ACK

for first segment in window returns before windowrsquos worth of data sent

delay = 2RTT + OR

Object request is

piggybacked

This is the ldquolower boundrdquo of latency

Let W denote a lsquofixedrsquo congestion window size (a positive integer)

3-103

Fixed (or static) congestion window (2)

Second case WSR lt RTT + SR

wait for ACK after sending windowrsquos worth of data sent

delay = 2RTT + OR+ (K-1)[SR + RTT - WSR]

Where K be the of windows of data that cover the objectIf O is the ldquosizerdquo of the object we have K = OWS (round to an integer)

3-104

TCP Delay Modeling Slow Start (1)

Now suppose window grows according to slow start

Will show that the delay for one object is

R

S

R

SRTTP

R

ORTTLatency P )12(2

where P is the number of times TCP idles at server

1min KQP

- where Q is the number of times the server idles if the object were of infinite size

- and K is the number of windows that cover the object

3-105

TCP Delay Modeling Slow Start (2)

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Examplebull OS = 15 segmentsbull K = 4 windowsbull Q = 2bull P = minK-1Q = 2

Server idles P=2 times

Delay componentsbull 2 RTT for connection establishment and requestbull OR to transmit objectbull time server idles due to slow start

Server idles P = minK-1Q times

3-106

TCP Delay Modeling (3)

R

S

R

SRTTPRTT

R

O

R

SRTT

R

SRTT

R

O

idleTimeRTTR

O

P

kP

k

P

pp

)12(][2

]2[2

2delay

1

1

1

th window after the timeidle 2 1 kR

SRTT

R

S k

ementacknowledg receivesserver until

segment send tostartsserver whenfrom time RTTR

S

window kth the transmit totime2 1

R

Sk

RTT

initia te TCPconnection

requestobject

first w indow= S R

second w indow= 2S R

third w indow= 4S R

fourth w indow= 8S R

com pletetransm issionobject

delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

Page 99: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-99

Why is TCP fair?

Two competing homogeneous sessions (e.g., similar propagation delay): additive increase gives a slope of 1 as throughput increases; multiplicative decrease cuts throughput proportionally.

[Figure: plot of Connection 1 throughput (x-axis) vs. Connection 2 throughput (y-axis), each bounded by the link capacity R, with the "equal bandwidth share" diagonal. Repeated cycles of congestion avoidance (additive increase) followed by loss (decrease window by factor of 2) move the two connections' joint throughput toward the equal-share line.]
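To see why additive increase plus multiplicative decrease pulls two connections toward the equal-share line, here is a minimal sketch (not from the slides) that simulates two synchronized AIMD connections sharing capacity R; the synchronized-loss assumption and the variable names are illustrative simplifications.

```python
# Minimal AIMD sketch: two synchronized connections sharing capacity R.
# Assumption (for illustration): both connections detect a loss whenever
# their combined rate exceeds R, and both halve their rates at the same time.
R = 100.0                      # link capacity (arbitrary units)
x1, x2 = 80.0, 10.0            # deliberately unequal starting throughputs

for step in range(200):
    if x1 + x2 > R:            # loss: multiplicative decrease for both
        x1, x2 = x1 / 2, x2 / 2
    else:                      # congestion avoidance: additive increase
        x1, x2 = x1 + 1, x2 + 1

print(f"after 200 steps: x1 = {x1:.1f}, x2 = {x2:.1f}")  # roughly equal shares
```

Each loss halves the gap between the two rates while additive increase leaves it unchanged, so the gap shrinks toward zero over successive cycles.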

3-100

Fairness (more)

Fairness and UDP: multimedia apps often do not use TCP, since they do not want their rate throttled by congestion control. Instead they use UDP, pumping audio/video at a constant rate and tolerating packet loss. (Research area: TCP-friendly congestion control.)

Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts, and Web browsers do this. Example: a link of rate R supporting 9 connections. A new app asking for 1 TCP connection gets rate R/10, while a new app asking for 11 TCP connections gets around R/2.

3-101

Delay modeling

Q: How long does it take to receive an object from a Web server after sending a request?

Delay is influenced by: TCP connection establishment, data transmission delay, and slow start (the sender's congestion window).

Notation, assumptions: one link between client and server, of rate R; S = MSS (bits); O = object size (bits); no retransmissions (no loss, no corruption); protocol overhead is negligible.

Window size: first assume a fixed congestion window of W segments, then dynamic window modeling (slow start).

3-102

Fixed congestion window (1): assume no congestion window constraint

Let W denote a fixed congestion window size (a positive integer).

First case: WS/R > RTT + S/R. The ACK for the first segment in the window returns before a window's worth of data has been sent, so the sender never stalls:

delay = 2RTT + O/R

(The object request is piggybacked. This is the "lower bound" of latency.)

3-103

Fixed (or static) congestion window (2)

Second case: WS/R < RTT + S/R. The sender must wait for an ACK after sending each window's worth of data:

delay = 2RTT + O/R + (K-1)[S/R + RTT - WS/R]

where K is the number of windows of data that cover the object; if O is the size of the object, K = O/(WS), rounded up to an integer.
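To make the two static-window cases concrete, here is a minimal sketch (not from the slides); the function name static_window_delay and the bits/seconds unit conventions are assumptions for illustration.

```python
import math

def static_window_delay(O, S, R, RTT, W):
    """Latency for a fixed congestion window of W segments.

    O: object size (bits), S: segment size (bits),
    R: link rate (bits/sec), RTT: round-trip time (sec).
    """
    K = math.ceil(O / (W * S))           # windows needed to cover the object
    base = 2 * RTT + O / R               # connection setup + request + transmission
    stall = S / R + RTT - W * S / R      # idle time per window, if positive
    if stall <= 0:                       # case 1: ACKs return before the window drains
        return base
    return base + (K - 1) * stall        # case 2: sender stalls after each of K-1 windows

# Example (hypothetical numbers): O = 100 kbytes, S = 536 bytes, R = 1 Mbps, RTT = 100 ms, W = 4
print(round(static_window_delay(8e5, 536 * 8, 1e6, 0.1, 4), 2), "sec")
```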

3-104

TCP Delay Modeling: Slow Start (1)

Now suppose the window grows according to slow start.

We will show that the delay for one object is:

Latency = 2RTT + O/R + P[RTT + S/R] - (2^P - 1)S/R

where P is the number of times TCP idles at the server:

P = min{Q, K - 1}

where Q is the number of times the server would idle if the object were of infinite size, and K is the number of windows that cover the object.

3-105

TCP Delay Modeling: Slow Start (2)

[Figure: timing diagram. The client initiates the TCP connection (one RTT), then requests the object; the server sends the first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R; transmission completes and the object is delivered. Time runs downward at client and server.]

Example:
• O/S = 15 segments
• K = 4 windows
• Q = 2
• P = min{K-1, Q} = 2
Server idles P = 2 times.

Delay components:
• 2 RTT for connection establishment and request
• O/R to transmit the object
• time the server idles due to slow start
Server idles P = min{K-1, Q} times.

3-106

TCP Delay Modeling (3)

[Figure: same slow-start timing diagram as on slide 3-105, annotated with the quantities below.]

For the kth window, k = 1, 2, ..., K:

time to transmit the kth window = 2^(k-1) S/R

time from when the server starts to send a segment until the server receives its acknowledgement = S/R + RTT

idle time after the kth window = [S/R + RTT - 2^(k-1) S/R]^+

delay = 2RTT + O/R + Σ_{p=1..P} idleTime_p
      = 2RTT + O/R + Σ_{k=1..P} [S/R + RTT - 2^(k-1) S/R]
      = 2RTT + O/R + P[RTT + S/R] - (2^P - 1) S/R

3-107

TCP Delay Modeling (4)

Recall K = the number of windows that cover the object. How do we calculate K?

K = min{k : 2^0 S + 2^1 S + ... + 2^(k-1) S ≥ O}
  = min{k : 2^0 + 2^1 + ... + 2^(k-1) ≥ O/S}
  = min{k : 2^k - 1 ≥ O/S}
  = min{k : k ≥ log2(O/S + 1)}
  = ⌈log2(O/S + 1)⌉

The calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
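For a rough numerical check of this model, here is a minimal Python sketch (not from the slides). The closed form used for Q is an assumption (the slide leaves it as homework), as is the function name slow_start_latency; units are bits and seconds.

```python
import math

def slow_start_latency(O, S, R, RTT):
    """Slide model: latency = 2*RTT + O/R + P*(RTT + S/R) - (2**P - 1)*S/R."""
    K = math.ceil(math.log2(O / S + 1))           # windows that cover the object
    # Q = number of idles for an infinite-size object; assumed closed form
    # (left as HW on the slide): largest k with S/R + RTT > 2**(k-1) * S/R.
    Q = math.ceil(math.log2(1 + RTT * R / S))
    P = min(Q, K - 1)                             # times the server actually idles
    return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R

# One Experiment-A-style row: S = 536 bytes, RTT = 100 ms, O = 100 kbytes, R = 28 kbps;
# the result lands close to the 28.9 sec entry in the table below.
print(round(slow_start_latency(8e5, 536 * 8, 28e3, 0.1), 2), "sec")
```

Because of rounding in the tables, the sketch's values land near, but not always exactly on, the printed entries.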

3-108

Experiment A: S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

R          O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    28.6 sec   1   28.8 sec                    28.9 sec
100 kbps   8.0 sec    2   8.2 sec                     8.4 sec
1 Mbps     800 msec   5   1.0 sec                     1.5 sec
10 Mbps    80 msec    7   0.28 sec                    0.98 sec

Slow start adds appreciable delay only when R is high. If R is low, the ACK comes back quickly and TCP quickly ramps up to its maximum rate.

3-109

Experiment B: S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

R          O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    1.43 sec   1   1.63 sec                    1.73 sec
100 kbps   0.4 sec    2   0.6 sec                     0.76 sec
1 Mbps     40 msec    3   0.24 sec                    0.52 sec
10 Mbps    4 msec     3   0.20 sec                    0.50 sec

Slow start adds appreciable delay when R is high, even for a relatively small object.

3-110

Experiment C: S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

R          O/R        P   Min latency (O/R + 2 RTT)   Latency with slow start
28 kbps    1.43 sec   3   3.4 sec                     5.8 sec
100 kbps   0.4 sec    3   2.4 sec                     5.2 sec
1 Mbps     40 msec    3   2.0 sec                     5.0 sec
10 Mbps    4 msec     3   2.0 sec                     5.0 sec

Slow start can significantly increase latency when the object size is relatively small and the RTT is relatively large.

3-111

HTTP Modeling: assume a Web page consists of 1 base HTML page (of size O bits) and M images (each of size O bits).

Non-persistent HTTP:
• M+1 TCP connections in series
• Response time = (M+1)O/R + (M+1)2RTT + sum of idle times

Persistent HTTP:
• 2 RTT to request and receive the base HTML file
• 1 RTT to request and receive the M images
• Response time = (M+1)O/R + 3RTT + sum of idle times

Non-persistent HTTP with X parallel connections:
• suppose M/X is an integer
• 1 TCP connection for the base file, then M/X sets of X parallel connections for the images
• Response time = (M+1)O/R + (M/X + 1)2RTT + sum of idle times
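As an illustration of how the three formulas compare, here is a minimal sketch (not from the slides); the function name http_response_time is hypothetical, and the sum of idle times is simply passed in as a parameter (0 by default) rather than modeled.

```python
def http_response_time(O, R, RTT, M, X, mode, idle=0.0):
    """Transmission and RTT terms from the slide formulas; 'idle' stands in
    for the sum of slow-start idle times, which the formulas leave symbolic."""
    tx = (M + 1) * O / R                        # (M+1) objects of O bits each
    if mode == "non-persistent":
        return tx + (M + 1) * 2 * RTT + idle
    if mode == "persistent":
        return tx + 3 * RTT + idle
    if mode == "parallel non-persistent":       # X parallel connections, M/X an integer
        return tx + (M / X + 1) * 2 * RTT + idle
    raise ValueError(mode)

# Example: O = 5 Kbytes, M = 10 images, X = 5, RTT = 1 sec, R = 100 kbps
for mode in ("non-persistent", "persistent", "parallel non-persistent"):
    print(mode, round(http_response_time(40e3, 100e3, 1.0, 10, 5, mode), 1), "sec")
```

Ignoring idle times, this setting already shows the ordering in the charts below: persistent is fastest, parallel non-persistent is in between, and serial non-persistent pays (M+1) extra round-trip pairs.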

3-112

[Figure: HTTP response time (in seconds) vs. access link rate (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5.]

For a low-bandwidth connection, response time is dominated by transmission time.

Persistent connections give only a minor improvement over parallel connections.

3-113

[Figure: HTTP response time (in seconds) vs. access link rate (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP; RTT = 1 sec, O = 5 Kbytes, M = 10, X = 5.]

For larger RTT, response time is dominated by TCP connection establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay×bandwidth networks.

3-114

Chapter 3 Summary

principles behind transport layer services: multiplexing, demultiplexing, reliable data transfer, flow control, congestion control

instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network "edge" (application, transport layers) and heading into the network "core".


delivered

tim e atc lient

tim e atserver

Where k=12hellipK

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 107: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-107

TCP Delay Modeling (4)

)1(log

)1(logmin

12min

222min

222min

2

2

110

110

S

OS

Okk

S

Ok

SOk

OSSSkK

k

k

k

Calculation of Q number of idles for infinite-size objectis similar (see HW)

Recall K = number of windows that cover object

How do we calculate K

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 108: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-108

Experiment A

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 286 sec 1 288 sec 289 sec

100 kbps 80 sec 2 82 sec 84 sec

1 Mbps 800 msec 5 10 sec 15 sec

10 Mbps 80 msec 7 028 sec 098 sec

S= 536 bytes RTT=100 msec O=100 kbytes (relatively large)

1 Slow start adds appreciable delay only when R is high If R is low ACK comes back quickly and TCP quickly ramps up to its maximum rate

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 109: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-109

Experiment B

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 1 163 sec 173 sec

100 kbps 04 sec 2 06 sec 076 sec

1 Mbps 40 msec 3 024 sec 052 sec

10 Mbps 4 msec 3 020 sec 050 sec

S= 536 bytes RTT=100 msec O = 5 kbytes (relatively small)

1 Slow start adds appreciable delay when R is high and for a relatively small object

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 110: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-110

Experiment C

R OR P Min latencyOR+2 RTT

Latency withSlow start

28 kbps 143 sec 3 34 sec 58 sec

100 kbps 04 sec 3 24 sec 52 sec

1 Mbps 40 msec 3 20 sec 50 sec

10 Mbps 4 msec 3 20 sec 50 sec

S= 536 bytes RTT=1 sec O = 5 kbytes (relatively small)

1 Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 111: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-111

HTTP Modeling Assume Web page consists of

1 base HTML page (of size O bits) M images (each of size O bits)

Non-persistent HTTP M+1 TCP connections in series Response time = (M+1)OR + (M+1)2RTT + sum of

idle times Persistent HTTP

2 RTT to request and receive base HTML file 1 RTT to request and receive M images Response time = (M+1)OR + 3RTT + sum of idle

times Non-persistent HTTP with X parallel connections

Suppose MX integer 1 TCP connection for base file MX sets of parallel connections for images Response time = (M+1)OR + (MX + 1)2RTT + sum of

idle times

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 112: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-112

02468

101214161820

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT = 100 msec O = 5 Kbytes M=10 and X=5

For low bandwidth connection amp response time dominated by transmission time

Persistent connections only give minor improvement over parallel connections

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 113: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-113

0

10

20

30

40

50

60

70

28Kbps

100Kbps

1Mbps

10Mbps

non-persistent

persistent

parallel non-persistent

HTTP Response time (in seconds)RTT =1 sec O = 5 Kbytes M=10 and

X=5

For larger RTT response time dominated by TCP establishment amp slow start delays Persistent connections now give important improvement particularly in high delaybandwidth networks

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary
Page 114: 3-1 Chapter 3: Transport Layer Our goals: r understand principles behind transport layer services: m Multiplexing and demultiplexing m reliable data transfer

3-114

Chapter 3 Summary principles behind transport

layer services multiplexing

demultiplexing reliable data transfer flow control congestion control

instantiation and implementation in the Internet UDP TCP

Next leaving the

network ldquoedgerdquo (application transport layers)

into the network ldquocorerdquo

  • Chapter 3 Transport Layer
  • Chapter 3 outline
  • Transport services and protocols
  • Transport vs network layer
  • Internet transport-layer protocols
  • Slide 6
  • Multiplexingdemultiplexing
  • How demultiplexing works
  • Connectionless demultiplexing
  • Connectionless demux (cont)
  • Connection-oriented demux
  • Connection-oriented demux (cont)
  • Slide 13
  • UDP User Datagram Protocol [RFC 768]
  • UDP more
  • UDP checksum
  • Slide 17
  • Slide 18
  • Principles of Reliable data transfer
  • Reliable data transfer getting started
  • Slide 21
  • Rdt10 reliable transfer over a reliable channel
  • Rdt20 channel with bit errors
  • rdt20 FSM specification
  • rdt20 operation with no errors
  • rdt20 error scenario
  • rdt20 has a fatal flaw
  • rdt21 sender handles garbled ACKNAKs
  • rdt21 receiver handles garbled ACKNAKs
  • rdt21 discussion
  • rdt22 a NAK-free protocol
  • rdt22 sender receiver fragments
  • rdt30 channels with errors and loss
  • rdt30 sender
  • rdt30 in action
  • Slide 36
  • Performance of rdt30
  • rdt30 stop-and-wait operation
  • Pipelined protocols
  • Pipelining increased utilization
  • Go-Back-N (sliding window protocol)
  • GBN sender extended FSM
  • GBN receiver extended FSM
  • GBN in action
  • Selective Repeat (improvement of the GBN Protocol)
  • Selective repeat sender receiver windows
  • Selective repeat
  • Selective repeat in action (N=4)
  • Selective repeat dilemma
  • Why bother study reliable data transfer
  • Slide 51
  • TCP Overview RFCs 793 1122 1323 2018 2581
  • TCP segment structure
  • TCP seq rsquos and ACKs
  • TCP Round Trip Time and Timeout
  • Slide 56
  • Example RTT estimation
  • Slide 58
  • Slide 59
  • TCP reliable data transfer
  • TCP sender events
  • TCP sender (simplified)
  • TCP retransmission scenarios
  • TCP retransmission scenarios (more)
  • TCP ACK generation [RFC 1122 RFC 2581]
  • Fast Retransmit
  • Fast retransmit algorithm
  • Slide 68
  • TCP Flow Control
  • TCP Flow control how it works
  • Slide 71
  • TCP Connection Management
  • TCP three-way handshake
  • TCP Connection Management (cont)
  • Slide 75
  • TCP Connection Management (cont)
  • Slide 77
  • Principles of Congestion Control
  • Causescosts of congestion scenario 1
  • Causescosts of congestion scenario 2
  • Slide 81
  • Causescosts of congestion scenario 3
  • Slide 83
  • Approaches towards congestion control
  • Case study ATM ABR congestion control
  • Slide 86
  • Slide 87
  • TCP Congestion Control
  • TCP AIMD
  • TCP Slow Start
  • TCP Slow Start (more)
  • Refinement
  • Refinement (more)
  • Summary TCP Congestion Control
  • Slide 95
  • Slide 96
  • Slide 97
  • Slide 98
  • Why is TCP fair
  • Fairness (more)
  • Delay modeling
  • Fixed congestion window (1) assume no congestion window constratint
  • Fixed (or static) congestion window (2)
  • TCP Delay Modeling Slow Start (1)
  • TCP Delay Modeling Slow Start (2)
  • TCP Delay Modeling (3)
  • TCP Delay Modeling (4)
  • Experiment A
  • Experiment B
  • HTTP Modeling
  • Slide 112
  • Slide 113
  • Chapter 3 Summary