Sungkyunkwan University
2018-Fall Computer Networks
Copyright 2000-2018 Networking Laboratory
Chapter 3
Transport Layer
Prepared by D.T. Le and H. Choo
Warriors of The Net
Video Content
► Explains, in rudimentary and easy-to-grasp footage, how the Internet,
and networking in general, work
► Link: https://www.youtube.com/watch?v=PBWhzz_Gn10
Chapter 3 Outline
3.1 Introduction
3.2 Transport-Layer Protocols
3.3 User Datagram Protocol (UDP)
3.4 Transmission Control Protocol (TCP)
The transport layer is located between the application layer and the
network layer
It provides process-to-process communication between two
application processes running on different hosts
Communication is provided using a logical connection
► The two application processes assume that there is an imaginary direct
connection through which they can send and receive messages
3.1 Introduction
3.1 Introduction
[Figure 3.1 Logical connection at the transport layer]
3.1 Introduction Transport-Layer Services
The transport layer provides services to the application layer and
receives services from the network layer
The services that can be provided by the transport layer are as follows:
► Flow control
► Error control
► Combination of flow and error control
► Congestion control
► Connectionless and connection-oriented services
► Process-to-process communication
► Addressing: port numbers
► ICANN* ranges
► Encapsulation and decapsulation
► Multiplexing and demultiplexing
(*) ICANN: Internet Corporation for Assigned Names and Numbers
3.1 Introduction Transport-Layer Services: Process-to-Process communication
A process is an application-layer entity (running program) that uses the
services of the transport layer
What is the difference between host-to-host communication and
process-to-process communication?
► Network layer: delivers the message only to the destination computer (host-to-host)
► Transport layer: delivers the message to the appropriate process (process-to-process)
[Figure 3.2 Network layer versus transport layer]
Practice Problem
Assume we have a set of dedicated computers in a system, each
designed to perform only a single task. Do we still need host-to-host
and process-to-process communication and two levels of addressing?
3.1 Introduction Transport-Layer Services: Addressing: Port Numbers (1/3)
[Figure 3.3 Port numbers]
Operating systems today support both multiuser and multiprogramming environments
► For communication, we must define the local host, local process,
remote host, and remote process
► The local host and the remote host are defined using IP addresses
► The processes need second identifiers, called port numbers
In the TCP/IP protocol suite, port numbers are integers between 0 and 65,535 (16 bits)
3.1 Introduction Transport-Layer Services: Addressing: Port Numbers (2/3)
The client process uses an ephemeral (temporary) port number
► It should be greater than 1,023 for some client/server programs to work properly
The server process uses a well-known port number
► TCP/IP uses universal port numbers for servers
The IP addresses and port numbers play different roles in selecting the
final destination of data
► The destination IP address defines the host among the different hosts
in the world
► After the host has been selected, the port number defines one of the
processes on this particular host
3.1 Introduction Transport-Layer Services: Addressing: Port Numbers (3/3)
[Figure 3.4 IP addresses versus port numbers]
ICANN (Internet Corporation for Assigned Names and Numbers) has
divided the port numbers into three ranges
► 0~1023: Well-known ports are assigned and controlled by ICANN
Reserved for use by well-known application protocols
• FTP (21), SSH (22), telnet (23), HTTP (80)…
► 1024~49151: Registered ports are not assigned or controlled by ICANN
They can only be registered with ICANN to prevent duplication
► 49152~65535: Dynamic ports are neither controlled nor registered
They can be used as temporary or private port numbers
[Figure 3.5 ICANN ranges]
3.1 Introduction Transport-Layer Services: ICANN Ranges (1/3)
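The three ICANN ranges above can be captured in a short helper; a minimal sketch (the function name is my own, not from the slides):

```python
# The three ICANN port ranges as a small classification helper.
def icann_range(port):
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit: 0-65535")
    if port <= 1023:
        return "well-known"     # assigned and controlled by ICANN
    if port <= 49151:
        return "registered"     # registered with ICANN, not controlled
    return "dynamic"            # temporary/private use
```

For example, HTTP's port 80 falls in the well-known range, while a typical ephemeral client port such as 50000 falls in the dynamic range.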
Example 3.1: In UNIX, the well-known ports are stored in a file called
/etc/services. We can use the grep utility to extract the line
corresponding to the desired application
SNMP (see Chapter 9) uses two port numbers (161 and 162), each for
a different purpose
3.1 Introduction Transport-Layer Services: ICANN Ranges (2/3)
$grep tftp /etc/services
tftp 69/tcp
tftp 69/udp
$grep snmp /etc/services
snmp 161/tcp #Simple Net Mgmt Proto
snmp 161/udp #Simple Net Mgmt Proto
snmptrap 162/udp #Traps for SNMP
Socket address: the combination of an IP address and a port number
To use the services of the transport layer in the Internet, we need client
socket address and server socket address
► The network-layer packet header and the transport-layer packet header
The first header contains the IP addresses
The second header contains the port numbers
[Figure 3.6 Socket address]
3.1 Introduction Transport-Layer Services: Socket Address
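The socket-address idea maps directly onto the BSD socket API, where an address is literally the (IP address, port number) pair. A minimal Python sketch (the loopback address and the use of port 0 to request an ephemeral port are illustrative choices, not from the slides):

```python
import socket

# A socket address is the (IP address, port number) pair, and the BSD
# socket API takes exactly that pair. Binding to port 0 asks the OS for
# an ephemeral port, the way a client gets its temporary port number.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))          # port 0 = "pick an ephemeral port for me"
host, port = sock.getsockname()      # the complete local socket address
sock.close()
```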
Practice Problem
Assume you need to write and test a client-server application program
on two hosts you have at home.
► What is the range of port numbers you would choose for the client program?
► What is the range of port numbers you would choose for the server
program?
► Can the two port numbers be the same?
To send a message from one process to another, the transport-layer
protocol encapsulates and decapsulates messages
Encapsulation happens at the sender site
Decapsulation happens at the receiver site
The packets at the transport layer are called user datagrams,
segments, or packets
[Figure 3.7 Encapsulation and decapsulation]
3.1 Introduction Transport-Layer Services: Encapsulation and Decapsulation
Multiplexing (many-to-one)
► It is a method by which multiple analog message signals or digital data
streams are combined into one signal over a shared medium
► The transport layer at the source performs multiplexing
Demultiplexing (one-to-many)
► It is to separate two or more signals that have been multiplexed
► The transport layer at the destination performs demultiplexing
3.1 Introduction Transport-Layer Services: Multiplexing and Demultiplexing (1/2)
[Figure 3.8 Multiplexing and demultiplexing]
3.1 Introduction Transport-Layer Services: Multiplexing and Demultiplexing (2/2)
Flow control manages the rate of data transmission
between a producer and a consumer to prevent loss of data items
at the consumer site
► If packets are produced faster than they can be consumed, the consumer
can be overwhelmed
► If packets are produced more slowly than they can be consumed, the
consumer must wait
3.1 Introduction Transport-Layer Services: Flow Control (1/4)
Delivery of items from a producer to a consumer can occur in one of
two ways: pushing or pulling
► Pushing is the delivery of items from the producer without a prior request
from the consumer
The consumer may be overwhelmed, so flow control is needed to prevent
discarding of the items
► Pulling is the delivery of items from the producer after the consumer has
requested them
In this case, there is no need for flow control
3.1 Introduction Transport-Layer Services: Flow Control (2/4)
[Figure 3.9 Pushing or pulling]
[Figure 3.10 Flow control at the transport layer]
3.1 Introduction Transport-Layer Services: Flow Control (3/4)
Flow control at transport layer
► Four entities: sender process, sender transport layer, receiver transport
layer, and receiver process
3.1 Introduction Transport-Layer Services: Flow Control (4/4)
Although flow control can be implemented in several ways, one of the
solutions is normally to use two buffers: one at the sending transport
layer and the other at the receiving transport layer
► The buffer of the sending transport layer
Full: it informs the application layer to stop passing chunks of messages
Some vacancies: it informs the application layer that it can pass message
chunks again
► The buffer of the receiving transport layer
Full: it informs the sending transport layer to stop sending packets
Some vacancies: it informs the sending transport layer that it can send
packets again
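The two-buffer idea can be sketched with the sender-side buffer alone; a minimal sketch in which the class name and the buffer capacity are illustrative choices, not from the slides:

```python
from collections import deque

# Minimal sketch of buffer-based flow control, sender side only.
# The application checks can_accept() before passing a chunk; the
# transport layer drains the buffer as it sends packets.
class SendBuffer:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.buf = deque()

    def can_accept(self):
        # what the transport layer "tells" the application layer
        return len(self.buf) < self.capacity

    def push(self, chunk):
        # application layer passes a chunk of the message
        if not self.can_accept():
            raise RuntimeError("buffer full: application must wait")
        self.buf.append(chunk)

    def pop(self):
        # transport layer removes a chunk to packetize and send
        return self.buf.popleft()
```

The receiving transport layer would mirror this logic, telling the sending transport layer (rather than the application) to stop and resume.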
3.1 Introduction Transport-Layer Services: Error Control (1/3)
In the Internet, since the underlying network layer is unreliable, we need
to make the transport layer reliable if the application requires reliability
Error control at the transport layer is responsible for
► Detecting and discarding corrupted packets
► Keeping track of lost and discarded packets and resending them
► Recognizing duplicate packets and discarding them
► Buffering out-of-order packets until the missing packets arrive
[Figure 3.11 Error control at the transport layer]
3.1 Introduction Transport-Layer Services: Error Control (2/3)
Sequence numbers
► Error control requires specifying
Corrupted or lost packets, duplicate packets, out-of-order packets and so on
► This can be done if the packets are numbered
► We can add a field to the transport layer packet to hold the sequence
number of the packet
If the header of the packet allows m bits for the sequence number,
the sequence numbers range from 0 to 2^m − 1
Example: if m is 4, the only sequence numbers are 0 through 15, inclusive
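The wrap-around of m-bit sequence numbers is one line of modular arithmetic; a sketch (the helper name is my own):

```python
# With an m-bit sequence-number field, numbers run from 0 to 2**m - 1
# and then wrap around to 0.
def next_seq(seq, m):
    return (seq + 1) % (2 ** m)
```

With m = 4, the successor of 15 wraps back to 0, matching the 0-through-15 range above.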
3.1 Introduction Transport-Layer Services: Error Control (3/3)
We can use both positive and negative signals as error control
► Positive signal: packet arrived safe and sound
► Negative signal: packet lost or corrupted
Acknowledgment
► The positive signal for error control
► The receiver can send an acknowledgment (ACK) to the sender and simply
discard the corrupted packets
► The sender can detect lost packets if it uses a timer
When a packet is sent, the sender starts a timer
If an ACK does not arrive before the timer expires, the sender resends the packet
3.1 Introduction Transport-Layer Services: Combination of Flow and Error Control (1/3)
The flow control requires the use of two buffers, one at the
sender site and the other at the receiver site
The error control requires the use of sequence and
acknowledgment numbers by both sides
These two requirements can be combined if we use two
numbered buffers: one at the sender, the other at the
receiver
3.1 Introduction Transport-Layer Services: Combination of Flow and Error Control (2/3)
The buffer is represented as a set of slices, called the sliding window,
that occupies part of the circle at any time
[Figure 3.12 Sliding window in circular format]
3.1 Introduction Transport-Layer Services: Combination of Flow and Error Control (3/3)
[Figure 3.13 Sliding window in linear format]
3.1 Introduction Transport-Layer Services: Congestion Control
Congestion
► Congestion in a network may occur if the load on the network is greater
than the capacity of the network
Why is there congestion in a network?
► Routers and switches have queues that hold the packets before and after
processing
If a router cannot process the packets at the same rate at which they arrive, the
queues become overloaded and congestion occurs
Congestion at the transport layer is actually the result of congestion at
the network layer
Connectionless service means independence between packets
The two transport layers do not coordinate with each other
The sending transport layer treats each chunk of data as a single unit without any
relation between the chunks
The receiving transport layer has no idea whether a packet is lost or out of order
No flow control, error control,
or congestion control
[Figure 3.14 Connectionless service]
3.1 Introduction Transport-Layer Services: Connectionless and Connection-Oriented Services (1/4)
[Figure 3.15 Connection-oriented service]
3.1 Introduction Transport-Layer Services: Connectionless and Connection-Oriented Services (2/4)
Connection-oriented service means dependency between packets
The client and the server first need to establish a logical connection
Flow control, error control,
and congestion control
Finite State Machine (FSM)
► Using this tool, each transport layer (sender or receiver) is modeled as a
machine with a finite number of states
► The machine is always in one of the states until an event occurs
► Each event is associated with two reactions
Defining the list (possibly empty) of actions to be performed
Determining the next state (which can be the same as the current state)
► One of the states must be defined as the initial state
3.1 Introduction Transport-Layer Services: Connectionless and Connection-Oriented Services (3/4)
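The FSM description above maps naturally onto a transition table from (state, event) to (list of actions, next state). A sketch with made-up state, event, and action names in the spirit of the Stop-and-Wait sender, not taken from the figures:

```python
# Sketch of the FSM idea: a table maps (state, event) to the pair
# (actions to perform, next state). The entries are illustrative.
TRANSITIONS = {
    ("ready", "request_from_app"): (["send packet", "start timer"], "blocking"),
    ("blocking", "ack_arrived"):   (["stop timer"], "ready"),
    ("blocking", "timeout"):       (["resend packet", "restart timer"], "blocking"),
}

def step(state, event):
    # each event yields a (possibly empty) action list and the next state,
    # which may be the same as the current state
    actions, next_state = TRANSITIONS[(state, event)]
    return actions, next_state
```

Here "ready" plays the role of the initial state; notice that a timeout leaves the machine in the same state, exactly as the definition allows.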
[Figure 3.16 Connectionless and connection-oriented service represented as FSMs]
3.1 Introduction Transport-Layer Services: Connectionless and Connection-Oriented Services (4/4)
Video Content
► Transport-layer protocols establish communication sessions between
computers
► A port number is used to identify each application in a computer
► Link: https://www.youtube.com/watch?v=Vdc8TCESIg8
3.2 Transport-Layer Protocols Overview (1/2)
3.2 Transport-Layer Protocols Overview (2/2)
We can create a transport-layer protocol by combining a set of services
described in the previous sections
To better understand the behavior of these protocols, we start with the
simplest one and gradually add more complexity, including:
► Simple protocol
► Stop-and-Wait protocol
► Go-Back-N protocol
► Selective-Repeat protocol
Each transport-layer protocol of the TCP/IP suite is either a modification
or a combination of some of these protocols.
3.2 Transport-Layer Protocols
The simple protocol is a connectionless protocol with neither flow
nor error control
► Assume that the receiver can immediately handle any packet it receives
FSMs
[Figure 3.17 Simple protocol]
3.2 Transport-Layer Protocols Simple Protocol
[Figure 3.18 FSMs for the simple protocol]
The sender sends packets one after another without even thinking
about the receiver
[Figure 3.19 Flow diagram for Example 3.3]
3.2 Transport-Layer Protocols Simple Protocol: Example 3.3
The Stop-and-Wait protocol is a connection-oriented protocol, which
uses both flow and error control
► The sender sends one packet, starts a timer, and waits for an acknowledgment
If an acknowledgment arrives before the timer expires, the timer is stopped,
and the sender sends the next packet
If the timer expires, the sender resends the previous packet
► Checksum is added to detect corrupted packets
3.2 Transport-Layer Protocols Stop-and-Wait Protocol
[Figure 3.20 Stop-and-Wait protocol]
3.2 Transport-Layer Protocols Stop-and-Wait Protocol: Sequence Numbers
To prevent duplicate packets, the protocol uses sequence numbers and
ACK numbers
Assume we have used x as a sequence number
► When the packet arrives safe at the receiver site
The ACK arrives at the sender site, causing the sender to send the next packet
numbered x+1
► When the packet is corrupted or never arrives at the receiver site
The sender resends the packet (numbered x) after the time-out
► When the ACK is corrupted or lost
The sender resends the packet (numbered x) after the time-out
Note that the packet here is a duplicate
We can see that there is a need for sequence numbers x and x+1
► We can let x=0 and x+1=1
► This means that the sequence is 0, 1, 0, 1, 0, and so on
► This is referred to as modulo 2 arithmetic
3.2 Transport-Layer Protocols Stop-and-Wait Protocol: Acknowledgment Numbers
The sequence numbers must be suitable for both data packets and
acknowledgments
Convention : The acknowledgment numbers always announce the
sequence number of the next packet expected by the receiver
► For example, if packet 0 has arrived safe and sound, the receiver sends an
ACK with acknowledgment (0+1) mod 2 = 1 (meaning packet 1 is expected
next)
► If packet 1 has arrived safe and sound, the receiver sends an ACK with
acknowledgment (1+1) mod 2 = 0 (meaning packet 0 is expected next)
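Putting the sequence-number and acknowledgment-number rules together, the Stop-and-Wait sender can be sketched as a loop. The `channel` argument here is a stand-in callable, not real networking code, and all names are illustrative:

```python
# Sketch of the Stop-and-Wait sender with modulo-2 sequence numbers.
# `channel` delivers a packet and returns the ACK number that comes
# back (or None if the ACK was lost); the retry loop plays the role of
# the time-out timer.
def stop_and_wait_send(messages, channel, max_tries=10):
    seq = 0
    for msg in messages:
        for _ in range(max_tries):           # each retry = one timer expiry
            ack = channel(seq, msg)          # send packet seq, wait for ACK
            if ack == (seq + 1) % 2:         # ACK names the next expected packet
                break                        # acknowledged: stop the "timer"
        else:
            raise RuntimeError("too many time-outs")
        seq = (seq + 1) % 2                  # sequence is 0, 1, 0, 1, ...
```

A lost or corrupted ACK simply makes the inner loop run again, resending the same numbered packet, which the receiver can recognize as a duplicate.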
3.2 Transport-Layer Protocols Stop-and-Wait Protocol: FSMs
[Figure 3.21 FSM for the Stop-and-Wait protocol]
[Fig 3.22 Flow diagram for Example 3.4]
3.2 Transport-Layer Protocols Stop-and-Wait Protocol: Example 3.4
3.2 Transport-Layer Protocols Stop-and-Wait Protocol: Efficiency
The Stop-and-Wait protocol is very inefficient if our channel
is thick and long
► By thick, we mean that our channel has a large bandwidth (high
data rate)
► By long, we mean the round-trip delay is long
► The product of these two is called the bandwidth-delay product
The bandwidth-delay product is a measure of the number of bits a sender
can transmit through the system while waiting for an ACK
3.2 Transport-Layer Protocols Stop-and-Wait Protocol: Example 3.5
Assume that, in a Stop-and-Wait system, the bandwidth of the line is
1 Mbps, and 1 bit takes 20 milliseconds to make a round trip
► What is the bandwidth-delay product?
► If the system data packets are 1,000 bits in length, what is the utilization
percentage of the link?
Solution
► The bandwidth-delay product is (1 × 10^6) × (20 × 10^−3) = 20,000 bits
The system can send 20,000 bits during the time it takes for the data to go from
the sender to the receiver and the acknowledgment to come back
► The link utilization is only 1,000/20,000, or 5 percent
3.2 Transport-Layer Protocols Stop-and-Wait Protocol: Example 3.6
What is the utilization percentage of the link in Example 3.5 if we have
a protocol that can send up to 15 packets before stopping and worrying
about the acknowledgments?
Solution
► The bandwidth-delay product is still 20,000 bits
The system can send up to 15 packets or 15,000 bits during a round trip
► This means the utilization is 15,000/20,000, or 75 percent
Of course, if there are damaged packets, the utilization percentage is much
less because packets have to be resent
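The arithmetic of Examples 3.5 and 3.6 can be checked with two small helpers; a sketch taking the RTT in milliseconds so the product stays in exact integers (the function names are my own):

```python
# Bandwidth-delay product and link utilization, as in Examples 3.5/3.6.
def bandwidth_delay_product(bandwidth_bps, rtt_ms):
    # bits that fit "in the pipe" during one round trip
    return bandwidth_bps * rtt_ms // 1000

def utilization(bits_sent_per_rtt, bandwidth_bps, rtt_ms):
    # fraction of the pipe actually filled before stopping for ACKs
    return bits_sent_per_rtt / bandwidth_delay_product(bandwidth_bps, rtt_ms)
```

With 1 Mbps and a 20 ms round trip, the product is 20,000 bits; one 1,000-bit packet per RTT gives 5 percent utilization, and fifteen packets give 75 percent, matching the examples.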
Pipelining: several packets can be sent before a sender receives
feedback about the previous packets
► Buffering at sender and receiver
There is no pipelining in the Stop-and-Wait protocol
► A sender must wait for a packet to reach the destination and be
acknowledged before the next packet can be sent
Two generic forms of pipelined protocol: Go-Back-N, Selective-Repeat
3.2 Transport-Layer Protocols Stop-and-Wait Protocol: Pipelining
3.2 Transport-Layer Protocols Go-Back-N Protocol (GBN) (1/2)
To improve the efficiency of transmission (to fill the pipe), multiple
packets must be in transit while the sender is waiting for ACKs
The key to Go-Back-N is that we can send several packets before
receiving ACKs, but the receiver can only buffer one packet
► We keep a copy of the sent packets until the ACKs arrive
► Note that several data packets and acknowledgments can be in the channel
at the same time
3.2 Transport-Layer Protocols Go-Back-N Protocol (GBN) (2/2)
[Figure 3.23 Go-Back-N protocol]
3.2 Transport-Layer Protocols Go-Back-N Protocol (GBN): Sequence Numbers & Acknowledgment Numbers
Sequence numbers
► The sequence numbers are modulo 2^m, where m is the size of the sequence
number field in bits
Acknowledgment numbers
► An acknowledgment number (ackNo) is cumulative and defines the
sequence number of the next packet expected to arrive
For example, if the ackNo is 7, it means all packets with sequence number up
to 6 have arrived, safe and sound, and the receiver is expecting the packet with
sequence number 7
The send window is an imaginary box covering the sequence numbers
of the data packets
► Sf: the sequence number of the first (oldest) outstanding packet
► Sn: the sequence number of the next packet to be sent
► Ssize: the size of the window
The maximum size of the window is 2^m − 1
[Figure 3.24 Send window for Go-Back-N]
3.2 Transport-Layer Protocols Go-Back-N Protocol (GBN): Send Window (1/2)
Sliding direction
[Figure 3.25 Sliding the send window]
The send window can slide one or more slots when an error-free ACK
with ackNo greater than or equal to Sf and less than Sn arrives
3.2 Transport-Layer Protocols Go-Back-N Protocol (GBN): Send Window (2/2)
[Figure 3.26 Receive window for Go-Back-N]
The receive window makes sure that the correct data packets are
received and that the correct acknowledgments are sent
► The receive window is an abstract concept defining an imaginary box of
size 1 with one single variable Rn
Rn: the sequence number of the next packet expected
Only a packet with a sequence number matching the value of Rn is accepted
and acknowledged
► The window slides when a correct packet has arrived
► Sliding occurs one slot at a time
3.2 Transport-Layer Protocols Go-Back-N Protocol (GBN): Receive Window
3.2 Transport-Layer Protocols Go-Back-N Protocol (GBN): Timers & Resending packets
Timers
► Although there can be a timer for each packet that is sent, in GBN we use
only one
The timer for the first outstanding packet always expires first
Resending packets
► When the timer expires, the sender resends all outstanding packets
For example, suppose the sender has already sent packet 6 (Sn = 7),
but the only timer expires
If Sf = 3, this means that packets 3, 4, 5, and 6 have not been acknowledged
The sender goes back and resends packets 3, 4, 5, and 6
That is why the protocol is called Go-Back-N
► On a time-out, the machine goes back N locations and resends all packets
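The go-back rule on a time-out, resend everything from Sf up to but not including Sn in modulo-2^m order, can be sketched as follows (the helper name is my own):

```python
# Go-Back-N time-out rule: on expiry of the single timer, every
# outstanding packet from Sf up to (but not including) Sn is resent,
# walking the sequence numbers modulo 2**m.
def packets_to_resend(sf, sn, m):
    out = []
    seq = sf
    while seq != sn:
        out.append(seq)
        seq = (seq + 1) % (2 ** m)
    return out
```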
3.2 Transport-Layer Protocols Go-Back-N Protocol (GBN): FSMs
[Figure 3.27 FSMs for the Go-back-N protocol]
3.2 Transport-Layer Protocols Go-Back-N Protocol (GBN): Send Window Size
In the Go-Back-N protocol, the size of the send window must be less
than 2^m; the size of the receive window is always 1
► As an example, we choose m = 2, which means the size of the window can
be 2^m − 1, or 3
[Figure 3.28 Send window size for Go-Back-N]
[Figure 3.29 Flow diagram for Example 3.7]
3.2 Transport-Layer Protocols Go-Back-N Protocol (GBN): Example 3.7
Cumulative acknowledgments can help if acknowledgments are
delayed or lost
3.2 Transport-Layer Protocols Go-Back-N Protocol (GBN): Example 3.8
[Figure 3.30 Flow diagram for Example 3.8]
3.2 Transport-Layer Protocols Go-Back-N Protocol (GBN): Go-Back-N versus Stop-and-Wait
The Stop-and-Wait protocol is actually a Go-Back-N
protocol in which there are only two sequence numbers
and the send window size is 1
► In other words, m = 1 and 2^m − 1 = 1
In Go-Back-N, we said that the arithmetic is modulo 2^m
In Stop-and-Wait it is modulo 2, which is the same as 2^m
when m = 1
3.2 Transport-Layer Protocols Selective-Repeat Protocol
The Go-Back-N protocol simplifies the process at the receiver
► This protocol is inefficient if the underlying network protocol loses a lot of
packets
The Selective-Repeat (SR) protocol has been devised; it resends only
selected packets, those that are actually lost or corrupted
[Figure 3.31 Outline of Selective-Repeat]
3.2 Transport-Layer Protocols Selective-Repeat Protocol: Windows (1/2)
There are differences between the windows in this protocol and the ones
in Go-Back-N protocol
► First, the maximum size of the send window is much smaller; it is 2^(m−1)
► Second, the receive window is the same size as the send window
The send window maximum size can be 2^(m−1)
► For example, if m = 4, the sequence numbers go from 0 to 15, but the
maximum size of the window is just 8 (it is 15 in the Go-Back-N protocol)
[Figure 3.32 Send window for Selective-Repeat protocol]
3.2 Transport-Layer Protocols Selective-Repeat Protocol: Windows (2/2)
The receive window in Selective-Repeat is totally different from the one
in Go-Back-N
► The size of the receive window is the same as the size of the send window
(maximum 2^(m−1))
Packets that arrive out of order wait for the earlier transmitted
packets to arrive before being delivered to the application layer
[Figure 3.33 Receive window for Selective-Repeat protocol]
3.2 Transport-Layer Protocols Selective-Repeat Protocol: Timer & Acknowledgments
Timer
► GBN treats outstanding packets as a group; SR treats them individually
Most transport-layer protocols that implement SR use only a single timer
Acknowledgments
► In GBN an ackNo is cumulative
It defines the sequence number of the next packet expected, confirming that all
previous packets have been received safe and sound
► In SR, an ackNo defines the sequence number of one single packet that is
received safe and sound
There is no feedback for any other packets
3.2 Transport-Layer Protocols Selective-Repeat Protocol: Example 3.9
Assume a sender sends 6 packets: packets 0, 1, 2, 3, 4, and 5
► The sender receives an ACK with ackNo = 3
► What is the interpretation if the system is using GBN or SR?
Solution
► GBN
The packets 0, 1, and 2 have been received uncorrupted and the receiver is
expecting packet 3
► SR
The packet 3 has been received uncorrupted
The ACK does not say anything about other packets
3.2 Transport-Layer Protocols Selective-Repeat Protocol: FSMs
[Figure 3.34 FSMs for SR protocol]
[Figure 3.35 Flow diagram for Example 3.10]
3.2 Transport-Layer Protocols Selective-Repeat Protocol: Example 3.10
3.2 Transport-Layer Protocols Selective-Repeat Protocol: Window Sizes
In Selective-Repeat, the size of the send and receive windows can be
at most one-half of 2^m
[Figure 3.36 Selective-Repeat, window size]
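The two window-size limits can be written as functions of the sequence-number field width m; a sketch (the function names are my own):

```python
# Maximum window sizes as functions of the m-bit sequence-number field.
def gbn_max_send_window(m):
    return 2 ** m - 1        # Go-Back-N; its receive window is always 1

def sr_max_window(m):
    return 2 ** (m - 1)      # Selective-Repeat: send and receive windows
```

For m = 4 this gives 15 for Go-Back-N but only 8 (one-half of 2^4) for Selective-Repeat, matching the figures.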
Practice Problem
In a network, the size of the receive window is 1 packet. Which of the
following protocols is being used by the network? Explain your answer.
► Stop-and-Wait
► Go-Back-N
► Selective-Repeat
3.2 Transport-Layer Protocols Bidirectional Protocols: Piggybacking (1/2)
The four protocols we discussed earlier in this section are all
unidirectional
► In real life, data packets are normally flowing in both directions: from client to
server and from server to client
► This means that acknowledgments also need to flow in both directions
A technique called piggybacking is used to improve the efficiency of the
bidirectional protocols
► When a packet is carrying data from A to B, it can also carry acknowledgment
feedback about arrived packets from B
► When a packet is carrying data from B to A, it can also carry acknowledgment
feedback about the arrived packets from A
► The client and server each use two independent windows: send and receive
3.2 Transport-Layer Protocols Bidirectional Protocols: Piggybacking (2/2)
[Figure 3.37 Design of piggybacking in Go-Back-N]
3.2 Transport-Layer Protocols Internet Transport-Layer Protocols (1/3)
A network is the interconnection of a set of devices capable of
communication
► In this definition, a device can be a host such as a large computer, desktop,
laptop, workstation, cellular phone, or security system
► A device in this definition can also be a connecting device such as a router,
a switch, a modem that changes the form of data, and so on
Although the Internet uses several transport-layer protocols, we discuss
only two of them
► User Datagram Protocol (UDP)
► Transmission Control Protocol (TCP)
UDP is an unreliable connectionless transport-layer protocol used for its
simplicity and efficiency
TCP is a reliable connection-oriented protocol that can be used for its
reliability
[Figure 3.38 Position of transport-layer protocols in the TCP/IP protocol suite]
3.2 Transport-Layer Protocols Internet Transport-Layer Protocols (2/3)
[Table 3.1 Some well-known ports used with UDP and TCP]
3.2 Transport-Layer Protocols Internet Transport-Layer Protocols (3/3)
3.3 User Datagram Protocol (UDP) Overview (1/2)
Video Content
► UDP is a connectionless, unreliable transport protocol
► It does not add anything to the services of IP except for providing process-
to-process communication instead of host-to-host communication
► UDP is a very simple protocol using a minimum of overhead
► If a process wants to send a small message and does not care much about
reliability, it can use UDP
► Link: https://www.youtube.com/watch?v=Vdc8TCESIg8
3.3 User Datagram Protocol (UDP) Overview (2/2)
3.3 User Datagram Protocol (UDP) User Datagram
UDP packets, called user datagrams, have a fixed-size header of 8 bytes made of four fields, each of 2 bytes (16 bits, giving values from 0 to 65,535)
► Source port: This field identifies the sending port; If not used, then it should
be zero
► Destination port: This field identifies the destination port and is required
► Total length: A 16-bit field that specifies the length in bytes of the entire
datagram (header and data)
► Checksum: The 16-bit checksum field is used for error-checking of the
header and data
[Figure 3.39 User datagram packet format]
3.3 User Datagram Protocol (UDP) User Datagram: Example 3.11
Consider a UDP header whose fields in hexadecimal begin CB84 000D 001C … (the checksum is not shown)
What is the source port number? → 0xCB84 = 52,100
What is the destination port number? → 0x000D = 13
What is the total length of the user datagram? → 0x001C = 28 bytes
What is the length of the data? → 28 − 8 = 20 bytes
Is the packet directed from a client to a server or vice versa? → From client to server, since the destination port is a well-known port
What is the client process? → Daytime (see Table 3.1)
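The field extraction in Example 3.11 can be reproduced by unpacking the four 16-bit big-endian fields of the header; a minimal sketch (the checksum value 0x0000 below is a placeholder, not part of the example):

```python
import struct

def parse_udp_header(header: bytes) -> dict:
    """Unpack the four 16-bit big-endian fields of an 8-byte UDP header."""
    src_port, dst_port, length, checksum = struct.unpack("!4H", header[:8])
    return {"src_port": src_port, "dst_port": dst_port,
            "length": length, "data_len": length - 8, "checksum": checksum}

# First three fields taken from Example 3.11; the checksum (0x0000) is a placeholder
header = bytes.fromhex("CB84" "000D" "001C" "0000")
info = parse_udp_header(header)
print(info)  # src_port=52100, dst_port=13 (Daytime), length=28, data_len=20
```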
3.3 User Datagram Protocol (UDP) UDP Services (1/7)
In this section, we discuss what portions of those general services are
provided by UDP
► Process-to-Process Communication
► Connectionless Services
► Flow Control
► Error Control
► Checksum
► Congestion Control
► Encapsulation and Decapsulation
► Queuing
► Multiplexing and Demultiplexing
Comparison: UDP and Simple Protocol
3.3 User Datagram Protocol (UDP) UDP Services (2/7)
Process-to-process communication
► UDP provides process-to-process communication using socket addresses,
a combination of IP addresses and port numbers
Connectionless services
► UDP provides a connectionless service
Each user datagram sent by UDP is an independent datagram
The user datagrams are not numbered
There is no connection establishment and no connection termination
• This means that each user datagram can travel on a different path
► Each request must be small enough to fit into one user datagram
Less than 65,507 bytes (65,535 minus 8 bytes for the UDP header and minus 20
bytes for the IP header)
3.3 User Datagram Protocol (UDP) UDP Services (3/7)
Flow control
► There is no flow control, and hence no window mechanism
► The receiver may be overwhelmed by incoming messages
Error control
► There is no error control mechanism in UDP except for the checksum
► The sender does not know if a message has been lost or duplicated
► When the receiver detects an error through the checksum, the user
datagram is silently discarded
3.3 User Datagram Protocol (UDP) UDP Services (4/7)
Checksum
► UDP checksum calculation includes three sections: a pseudoheader (the part of the header of the IP packet, discussed in Chapter 4), the UDP header, and the data
The protocol field is added to ensure that the packet belongs to UDP (17)
► The sender of a UDP packet can choose not to calculate the checksum
[Figure 3.40 Pseudoheader for checksum calculation]
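The checksum over pseudoheader, UDP header, and data is the standard 16-bit Internet checksum; a minimal sketch (the helper names and the sample addresses below are ours, not from the slides):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum used by UDP and TCP."""
    if len(data) % 2:                 # pad an odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # wrap carries around
    return (~total) & 0xFFFF

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """Prepend the pseudoheader: src IP, dst IP, all-0s byte, protocol 17, UDP length."""
    pseudo = src_ip + dst_ip + bytes([0, 17]) + len(udp_segment).to_bytes(2, "big")
    return internet_checksum(pseudo + udp_segment)

# Usage: compute with the checksum field zeroed, then store the result there.
seg = bytes.fromhex("CB84" "000D" "000C" "0000") + b"hihi"   # 8-byte header + 4 data bytes
c = udp_checksum(bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]), seg)
```

A receiver recomputes the sum over the same data with the received checksum in place; a result of zero means the segment passed the check.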
3.3 User Datagram Protocol (UDP) UDP Services (5/7)
Example 3.12
► What value is sent for the checksum in one of the following hypothetical
situations?
a. The sender decides not to include the checksum
The value sent for the checksum field is all 0s
b. The sender decides to include the checksum, but the value of the sum is all 1s
When the sender complements the sum, the result is all 0s; the sender complements
the result again before sending. The value sent for the checksum is all 1s
c. The sender decides to include the checksum, but the value of the sum is all 0s
This situation never happens because it implies that the value of every term included
in the calculation of the sum is all 0s, which is impossible
3.3 User Datagram Protocol (UDP) UDP Services (6/7)
Congestion control
► Since UDP is a connectionless protocol, it does not provide congestion
control
► UDP assumes that the packets sent are small and sporadic and cannot
create congestion in the network
Encapsulation and decapsulation
► To send a message from one process to another, the UDP protocol
encapsulates and decapsulates messages
Queuing
► In UDP, queues are associated with ports
Some implementations create both an incoming and an outgoing queue
associated with each process
Other implementations create only an incoming queue associated with each
process
3.3 User Datagram Protocol (UDP) UDP Services (7/7)
Multiplexing and demultiplexing
► In a host running a TCP/IP protocol suite, there is only one UDP but
possibly several processes that may want to use the services of UDP
To handle this situation, UDP multiplexes and demultiplexes
Comparison between UDP and generic simple protocol
► The only difference is that UDP provides an optional checksum
added to packets for error detection
3.3 User Datagram Protocol (UDP) UDP Applications
Although UDP meets almost none of the criteria we mentioned earlier
for a reliable transport-layer protocol, UDP is preferable for some
applications
In this section, we first discuss some features of UDP that may need to
be considered when one designs an application program and then show
some typical applications
3.3 User Datagram Protocol (UDP) UDP Applications: UDP Features (1/5)
Connectionless service
► It is an advantage if, for example, a client application needs to send a short
request to a server and to receive a short response
► If delay is an important issue for the application, the connectionless service
is preferred
Example 3.13
► A client-server application such as DNS (see Chapter 2) uses the services
of UDP
In the DNS, a client needs to send a short request to a server and to receive a
quick response from it
The request and response can each fit in one user datagram
Since only one message is exchanged in each direction, the connectionless
feature is not an issue
• The client or server does not worry that messages are delivered out of order
3.3 User Datagram Protocol (UDP) UDP Applications: UDP Features (2/5)
Example 3.14
► A client-server application such as SMTP (see Chapter 2), which is used in
electronic mail, cannot use the services of UDP
Because a user can send a long e-mail message, which may include multimedia
(images, audio, or video)
If the application uses UDP and the message does not fit in one single user
datagram, the message must be split by the application into different user
datagrams
► Here the connectionless service may create problems
The user datagrams may arrive and be delivered to the receiver application out
of order
The receiver application may not be able to reorder the pieces
► This means the connectionless service has a disadvantage for an
application program that sends long messages
3.3 User Datagram Protocol (UDP) UDP Applications: UDP Features (3/5)
Lack of error control
► UDP does not provide error control; it provides an unreliable service
► When a transport layer provides reliable services, if a part of the message
is lost or corrupted, it needs to be resent
The receiving transport layer cannot deliver that part to the application
immediately
There is an uneven delay between different parts of the message delivered to
the application layer
► Some applications by nature do not even notice these uneven delays, but for others they are critical
3.3 User Datagram Protocol (UDP) UDP Applications: UDP Features (4/5)
Example 3.15
► Assume we are downloading a very large text file from the Internet
We definitely need to use a transport layer that provides reliable service
We don’t want part of the file to be missing or corrupted when we open the file
► The delay created between the deliveries of the parts is not an overriding
concern for us; we wait until the whole file is composed before looking at it
► In this case, UDP is not a suitable transport layer
3.3 User Datagram Protocol (UDP) UDP Applications: UDP Features (5/5)
Example 3.16
► Assume we are using a real-time interactive application, such as Skype
Audio and video are divided into frames and sent one after another
► If the transport layer is supposed to resend a corrupted or lost frame, the
synchronizing of the whole transmission may be lost
► However, if each small part of the screen is sent using one single user
datagram, the receiving UDP can easily ignore the corrupted or lost packet
and deliver the rest to the application program
That part of the screen is blank for a very short period of time, which most
viewers do not even notice
3.3 User Datagram Protocol (UDP) UDP Applications: Typical Applications
The following shows some typical applications that can benefit more
from the services of UDP than from those of TCP
► The Trivial File Transfer Protocol (TFTP) process includes its own flow and error control; it can therefore easily use UDP
► UDP is a suitable transport protocol for multicasting
► UDP is used for management processes such as SNMP (see Chapter 9)
► UDP is used for some route updating protocols such as Routing
Information Protocol (RIP) (see Chapter 4)
► UDP is normally used for interactive real-time applications (See Chapter 8)
3.4 Transmission Control Protocol (TCP) Overview (1/2)
Video Content
► Process-to-process communication
Using port numbers
► Connection-oriented
Handshaking (exchange of control messages) before exchanging data
► Reliable and in-order byte stream
A combination of GBN and SR protocols
► Pipelined
► Sender and receiver buffers
► Full duplex data
Bi-directional data flow in same connection
► Flow control
► Link: https://www.youtube.com/watch?v=Vdc8TCESIg8
3.4 Transmission Control Protocol (TCP) Overview (2/2)
3.4.1 TCP Services (1/5) Process-to-Process Communication
Using port numbers
[Table 3.1: Some well-known ports used with UDP and TCP]
Allows the sending process to deliver data as a stream of bytes
Allows the receiving process to obtain data as a stream of bytes
The two processes seem to be connected by an imaginary “tube”
[Figure 3.41 Stream delivery: sending process → stream of bytes → receiving process]
3.4.1 TCP Services (2/5) Stream Delivery Service (1/3)
[Figure 3.42 Sending and receiving buffers]
Sending and Receiving Buffers
The sending and the receiving processes may not produce and consume data at the same rate
One way to implement a buffer is to use a circular array of 1-byte locations
The two buffers need not be the same size
3.4.1 TCP Services (3/5) Stream Delivery Service (2/3)
3.4.1 TCP Services (4/5) Stream Delivery Service (3/3)
[Figure 3.43 TCP segments]
Segments
TCP groups a number of bytes together into a packet called a segment
TCP adds a header to each segment (for control purposes)
Segments may be received out of order, lost, or corrupted and resent
3.4.1 TCP Services (5/5) Full-Duplex Communication
Data can flow in both directions at the same time
Multiplexing and Demultiplexing
Multiplexing at the sender and demultiplexing at the receiver
Connection-Oriented Service
The two TCPs establish a logical connection between two processes
Data are exchanged in both directions
The connection is terminated
Reliable Service
Uses an acknowledgment mechanism to check the safe and sound arrival of
data
3.4.2 TCP Features (1/2) Byte Number
TCP numbers all data bytes (octets) that are transmitted in a connection
Sequence Number
TCP assigns a sequence number to each segment that is being sent
Defines the number assigned to the first data byte contained in that segment
The numbering starts with the initial sequence number (ISN), which is an arbitrarily generated number between 0 and 2^32 − 1
Acknowledgement Number
Each party uses an acknowledgment number to confirm the bytes it has
received
Defines the number of the next byte that the party expects to receive
3.4.2 TCP Features (2/2) Example 3.17
Suppose a TCP connection is transferring a file of 5,000 bytes. The first byte
is numbered 10,001. What are the sequence numbers for each segment if
data are sent in five segments, each carrying 1,000 bytes?
Solution
The following shows the sequence number for each segment:
Segment 1 → seqNo: 10,001 (carries bytes 10,001 to 11,000)
Segment 2 → seqNo: 11,001 (carries bytes 11,001 to 12,000)
Segment 3 → seqNo: 12,001 (carries bytes 12,001 to 13,000)
Segment 4 → seqNo: 13,001 (carries bytes 13,001 to 14,000)
Segment 5 → seqNo: 14,001 (carries bytes 14,001 to 15,000)
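The numbering in Example 3.17 follows directly from the first byte number and the segment size; a small sketch (function name is ours):

```python
def segment_ranges(first_byte: int, total_bytes: int, seg_size: int):
    """Return (seqNo, last byte carried) for each segment of a byte stream."""
    ranges = []
    for start in range(first_byte, first_byte + total_bytes, seg_size):
        end = min(start + seg_size, first_byte + total_bytes) - 1
        ranges.append((start, end))
    return ranges

# Example 3.17: 5,000 bytes starting at byte 10,001, in 1,000-byte segments
for i, (seq, last) in enumerate(segment_ranges(10_001, 5_000, 1_000), 1):
    print(f"Segment {i}: seqNo = {seq:,}, carries bytes {seq:,} to {last:,}")
```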
3.4.3 Segment [RFC 793 & 3168] (1/2)
[Figure 3.44 TCP segment format]
[Figure 3.45 Control field]
URG: Urgent pointer is valid
ACK: Acknowledgement is valid
PSH: Request for push
RST: Reset the connection
SYN: Synchronize sequence numbers
FIN: Terminate the connection
3.4.3 Segment [RFC 793 & 3168] (2/2)
[Figure 3.46 Pseudoheader added to the TCP datagram]
The pseudoheader is the part of the header of the IP packet
For the TCP, the value for the protocol field is 6
3.4.4 A TCP Connection (1/7) Connection Establishment (1/2)
[Figure 3.47 Connection establishment using three-way handshaking]
- rwnd: the receive window size
Make a connection: Three-way handshaking
Step 1: Client host sends a TCP SYN segment to the server (no data, but it consumes one imaginary byte)
Step 2: Server host receives the SYN and replies with a SYN + ACK segment (no data, but it consumes one imaginary byte)
Step 3: Client receives the SYN + ACK and replies with an ACK segment (no data)
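The sequence/acknowledgment bookkeeping of the three steps above can be sketched as follows; the ISN values used in the usage example are hypothetical, not from the slides:

```python
def three_way_handshake(client_isn: int, server_isn: int):
    """Segments exchanged in the three-way handshake.
    The SYN and SYN+ACK each consume one imaginary sequence number."""
    syn     = {"flags": "SYN",     "seq": client_isn}
    syn_ack = {"flags": "SYN+ACK", "seq": server_isn, "ack": client_isn + 1}
    ack     = {"flags": "ACK",     "seq": client_isn + 1, "ack": server_isn + 1}
    return [syn, syn_ack, ack]

# Hypothetical ISNs chosen for illustration
for seg in three_way_handshake(8000, 15000):
    print(seg)
```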
When the server receives a SYN segment from a malicious attacker using a fake (spoofed) source IP address
The TCP server sends the SYN + ACK segment to the fake client after allocating the necessary resources
These resources are allocated without ever being used
If, during a short period of time, the number of such SYN segments is large, the server eventually runs out of resources
It is then unable to accept connection requests from valid clients
This serious security problem is called a SYN flooding attack, one kind of denial-of-service attack
3.4.4 A TCP Connection (2/7) Connection Establishment (2/2)
3.4.4 A TCP Connection (3/7) Data Transfer (1/2)
[Figure 3.48 Data transfer]
TCP buffers the received data when they arrive and delivers them to the
receiving process when the process is ready
This type of flexibility increases the efficiency of TCP
On some occasions, the delayed delivery of data may not be acceptable
The PSH (push) flag is set to make the receiving TCP deliver data to the receiving process as soon as they are received
On other occasions, the application program needs to send urgent bytes
TCP then sends a segment with the URG bit set
The urgent data are placed at the beginning of the segment
The urgent pointer field defines the end of the urgent data
3.4.4 A TCP Connection (4/7) Data Transfer (2/2)
3.4.4 A TCP Connection (5/7) Connection Termination (1/2)
[Figure 3.49 Connection termination using three-way handshaking]
Closing a connection: three-way handshaking
Step 1: Client end system sends a TCP FIN control segment to the server (consumes one imaginary byte if it carries no data)
Step 2: Server receives the FIN and replies with a FIN + ACK segment (consumes one imaginary byte if it carries no data)
Step 3: Client receives the FIN + ACK and replies with an ACK segment (no data)
3.4.4 A TCP Connection (6/7) Connection Termination (2/2)
[Figure 3.50 Half-close]
Closing a connection: half-close
Step 1: Client half-closes the connection by sending a FIN segment
Step 2: Server accepts by sending an ACK segment; data can still travel from the server to the client, and acknowledgments can travel from the client to the server
Step 3: After sending all its data, the server sends a FIN segment, which is acknowledged by the client
3.4.4 A TCP Connection (7/7) Connection Reset
TCP can reset the connection to
deny a connection request
abort an existing connection
terminate an idle connection
All of these are done with the RST (reset) flag
3.4.5 State Transition Diagram (1/3)
[Table 3.2 States for TCP]
To keep track of all the different events happening, TCP is specified as a finite state machine (FSM)
3.4.5 State Transition Diagram (2/3)
[Figure 3.51 State transition diagram]
3.4.5 State Transition Diagram (3/3) Half-Close Scenario
[Figure 3.52 Transition diagram] [Figure 3.53 Time-line diagram]
Practice Problem
Eve, the intruder, sends a user datagram to Bob, the server, using
Alice’s IP address. Can Eve, pretending to be Alice, receive the
response from Bob?
3.4.6 Windows in TCP (1/3)
TCP uses two windows for each direction of data transfer
send window and receive window
four windows for a bidirectional communication
Assume that communication is only unidirectional
Say from client to server
The bidirectional communication can be inferred using two
unidirectional communications with piggybacking
3.4.6 Windows in TCP (2/3) Send Window
[Figure 3.54 Send window in TCP]
The send window size is dictated by the receiver (flow control) and by congestion in the underlying network (congestion control)
3.4.6 Windows in TCP (3/3) Receive Window
[Figure 3.55 Receive window in TCP]
Receive window size (rwnd)
rwnd = buffer size – number of waiting bytes to be pulled
3.4.7 Flow Control (1/7)
Flow control balances the rate at which a producer creates data with the rate at which a consumer can use the data
Here we assume that the logical channel is error-free
[Figure 3.56 Data flow and flow control feedbacks in TCP]
3.4.7 Flow Control (2/7) Opening and Closing Window (1/3)
TCP forces the sender and the receiver to adjust their window
The receive window
closes (moves its left wall to the right) when more bytes arrive from the
sender
opens (moves its right wall to the right) when more bytes are pulled by the
process
3.4.7 Flow Control (3/7) Opening and Closing Window (2/3)
The send window (is controlled by the receiver)
closes (moves its left wall to the right) when a new acknowledgment allows
it to do so
new ackNo + new rwnd = last ackNo + last rwnd
opens (its right wall moves to the right) when the receive window size (rwnd)
advertised by the receiver allows it to do so
new ackNo + new rwnd > last ackNo + last rwnd
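The two conditions above can be checked mechanically; a small sketch (the function and label names are ours, not from the text):

```python
def send_window_change(last_ack: int, last_rwnd: int,
                       new_ack: int, new_rwnd: int) -> str:
    """Classify what a new acknowledgment does to the send window's right wall."""
    old_right = last_ack + last_rwnd
    new_right = new_ack + new_rwnd
    if new_right > old_right:
        return "opens"       # right wall moves to the right
    if new_right == old_right:
        return "unchanged"   # only the left wall moves (window closes)
    return "shrinks"         # right wall would move left (discouraged)

print(send_window_change(200, 800, 300, 800))  # opens
print(send_window_change(200, 800, 300, 700))  # unchanged
```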
3.4.7 Flow Control (4/7) Opening and Closing Window (3/3)
[Figure 3.57 An example of flow control]
Consider unidirectional
communication from
client to server
Only one window at
each side is shown
Ignore error control
No segment is
corrupted, lost,
duplicated, or has
arrived out of order
3.4.7 Flow Control (5/7) Shrinking of Window
[Figure 3.58 Example 3.18]
The receive window cannot shrink
The send window can shrink if the receiver defines a value for rwnd that
results in shrinking the window
new ackNo + new rwnd < last ackNo + last rwnd
Some implementations do not allow shrinking of the send window
The receiver can postpone its feedback until it has enough buffer space
The receiver can temporarily shut down the window by sending an rwnd of 0
In Figure 3.58, bytes that were already sent end up outside the shrunken window
3.4.7 Flow Control (6/7) Silly Window Syndrome (1/2)
When the data being sent are very small, the overhead of the headers becomes significant; this is called the silly window syndrome
The sending TCP creates the syndrome if it serves an application that creates data slowly
TCP must be forced to wait and collect data so that it can send them in a larger block
Nagle’s algorithm (for the sending TCP)
Step 1: Send the first piece of data it receives from the sending application even
if it is only 1 byte
Step 2: Accumulate data in the output buffer and wait until either the receiving TCP sends an acknowledgment or the accumulated data fill a maximum-size segment
Step 3: Step 2 is repeated for the rest of the transmission
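The three steps of Nagle's algorithm can be sketched as a tiny sender class; this is a simplification (one outstanding-segment flag instead of full TCP state), and all names are ours:

```python
class NagleSender:
    """Minimal sketch of Nagle's algorithm on the sending side.
    `send` is a stand-in for handing a segment to the network."""

    def __init__(self, mss: int, send):
        self.mss, self.send = mss, send
        self.buffer = b""
        self.unacked = False          # True while a segment awaits acknowledgment

    def write(self, data: bytes):
        self.buffer += data
        self._try_send()

    def on_ack(self):                 # acknowledgment arrived (Step 2 condition)
        self.unacked = False
        self._try_send()

    def _try_send(self):
        # Send if nothing is outstanding (first piece, even 1 byte, Step 1)
        # or if a full maximum-size segment has accumulated (Step 2).
        while self.buffer and (not self.unacked or len(self.buffer) >= self.mss):
            seg, self.buffer = self.buffer[:self.mss], self.buffer[self.mss:]
            self.send(seg)
            self.unacked = True

sent = []
s = NagleSender(mss=4, send=sent.append)
s.write(b"a")    # first byte goes out immediately
s.write(b"bc")   # buffered: a segment is outstanding and < MSS accumulated
s.write(b"de")   # buffer reaches MSS -> "bcde" is sent
print(sent)      # [b'a', b'bcde']
```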
3.4.7 Flow Control (7/7) Silly Window Syndrome (2/2)
The receiving TCP creates the syndrome if it serves an application that consumes data slowly
Two solutions have been proposed for the receiving TCP
Clark’s algorithm:
Send acknowledgment as soon as the data arrive
Announce a window size of zero until either there is enough space to
accommodate a segment of maximum size or until at least half of the receive
buffer is empty
Second solution:
Delay sending the acknowledgment
Wait until there is a decent amount of space in its incoming buffer
The acknowledgment should not be delayed by more than 500 ms
3.4.8 Error Control (1/13)
TCP provides reliability using error control
Checksum is used to check for a corrupted segment
We discuss checksum calculation in Chapter 5
Acknowledgment is used to confirm the receipt of data segments
ACK segments do not consume sequence numbers
Retransmission is the heart of the error control mechanism
When the retransmission timer expires
When the sender receives three duplicate ACKs
Delivery of out-of-order segments is postponed to guarantee that data are delivered to the process in order
Out-of-order segments are stored temporarily until the missing segments arrive
FSMs for Data Transfer in TCP
3.4.8 Error Control (2/13) Acknowledgment (1/2)
Cumulative Acknowledgment (ACK)
This is sometimes referred to as positive cumulative acknowledgment
The receiver advertises the next byte it expects to receive
No feedback is provided for discarded, lost, or duplicate segments
ACK field in the TCP header is valid only when the ACK flag bit is set to 1
Selective Acknowledgment (SACK)
Does not replace an ACK, but reports additional information
block of bytes that is out of order
block of bytes that is duplicated
SACK is implemented as an option at the end of the TCP header
3.4.8 Error Control (3/13) Acknowledgment (2/2)
Several rules for generating ACKs have been defined and used by various implementations
Sender must include (piggyback) an ACK in its data segment (Rule 1)
The receiver needs to delay sending an ACK segment if there is only one
outstanding in-order segment (Rule 2)
until another segment arrives or until a period of time (normally 500 ms) has passed
The receiver immediately sends an ACK segment to announce the next
sequence number expected when
A segment arrives with a sequence number that is expected by the receiver, and
the previous in-order segment has not been acknowledged (Rule 3)
A segment arrives with an out-of-order sequence number that is higher than
expected (Rule 4)
A missing segment arrives (Rule 5)
A duplicate segment arrives (Rule 6)
3.4.8 Error Control (4/13) Retransmission
Retransmission after RTO
The sending TCP maintains one retransmission time-out (RTO) for each
connection
When the timer matures, i.e. times out, TCP resends the segment in the front
of the queue and restarts the timer
RTO is dynamic in TCP and is updated based on the round-trip time (RTT) of
segments.
RTT is the time needed for a segment to reach a destination and for an
acknowledgment to be received
Retransmission after Three Duplicate ACK Segments
If three duplicate acknowledgments arrive, the missing segment (the one the acknowledgments point to) is retransmitted without waiting for the time-out
This feature is called fast retransmission
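Detecting fast retransmission is just counting repeated acknowledgment numbers; a minimal sketch (function name and event labels are ours):

```python
def ack_events(acks):
    """Emit 'fast_retransmit' when the third duplicate ACK for the same
    ackNo arrives (i.e. the fourth ACK carrying that number)."""
    last, dups, events = None, 0, []
    for ack in acks:
        if ack == last:
            dups += 1
            if dups == 3:
                events.append(("fast_retransmit", ack))
        else:
            last, dups = ack, 0
            events.append(("new_ack", ack))
    return events

# Four ACKs with ackNo 2000 = three duplicates -> fast retransmit triggers
print(ack_events([1000, 2000, 2000, 2000, 2000, 3000]))
```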
3.4.8 Error Control (5/13) FSMs for Data Transfer in TCP (1/2)
Sender-side FSM
[Figure 3.59 Simplified FSM for the TCP sender side]
3.4.8 Error Control (6/13) FSMs for Data Transfer in TCP (2/2)
Receiver-side FSM
[Figure 3.60 Simplified FSM for the TCP receiver side]
3.4.8 Error Control (7/13) Some Scenarios (1/7)
Normal operation
Bidirectional data transfer between two systems
[Figure 3.61 Normal operation]
3.4.8 Error Control (8/13) Some Scenarios (2/7)
Lost segment
A segment is lost or corrupted in unidirectional data transfer
[Figure 3.62 Lost segment]
3.4.8 Error Control (9/13) Some Scenarios (3/7)
Fast retransmission
In this scenario the RTO has a large value, so three duplicate ACKs trigger retransmission before the timer expires
[Figure 3.63 Fast retransmission]
3.4.8 Error Control (10/13) Some Scenarios (4/7)
Delayed segment & Duplicate segment
TCP segment may reach the final destination through a different route with a
different delay
Hence TCP segments may be delayed
A delayed segment may cause the sender's timer to expire, and the segment is resent
If the delayed segment arrives after it has been resent, it is considered a duplicate segment
The duplicate segment is discarded, and an ACK is sent with ackNo
defining the expected segment
3.4.8 Error Control (11/13) Some Scenarios (5/7)
Automatically corrected lost ACK
A key advantage of using cumulative acknowledgments
[Figure 3.64 Lost acknowledgment; the ACKs shown follow Rule 2 and Rule 3]
3.4.8 Error Control (12/13) Some Scenarios (6/7)
Lost acknowledgment corrected by resending a segment
The correction is triggered by the RTO timer
A duplicate segment is the result
[Figure 3.65 Lost acknowledgment corrected by resending a segment]
3.4.8 Error Control (13/13) Some Scenarios (7/7)
Deadlock created by lost acknowledgment
A receiver sends an acknowledgment with rwnd set to 0
The sender shuts down its window temporarily
After a while, the receiver wants to remove the restriction; however, if it has
no data to send, it sends an ACK segment with a nonzero rwnd
A problem arises if this acknowledgment is lost
The sender is waiting for an acknowledgment that announces the nonzero rwnd
The receiver thinks that the sender has received this and is waiting for data
This situation is called a deadlock; each end is waiting for a response
from the other end and nothing is happening
3.4.9 TCP Congestion Control (1/12): Congestion Window
There is no congestion at the other end, but there may be congestion in
the middle
The receive window (rwnd) can only guarantee that the buffer of the receiving TCP never overflows
TCP uses another variable called a congestion window, cwnd
whose size is controlled by the congestion situation in the network
The cwnd variable and the rwnd variable together define the size of the
send window in TCP
Actual window size = minimum (rwnd, cwnd)
3.4.9 TCP Congestion Control (2/12) Congestion Detection
The TCP sender uses the occurrence of two events as signs of
congestion in the network
time-out
receiving three duplicate ACKs
The lack of regular, timely receipt of ACKs, which results in a time-out, is the sign of strong congestion: the corresponding segments are assumed lost, and the loss is attributed to congestion
The receipt of three duplicate ACKs (four ACKs with the same acknowledgment number) is the sign of weak congestion
The network is either slightly congested or has recovered from the
congestion
3.4.9 TCP Congestion Control (3/12) Congestion Policies (1/2)
Slow Start: Exponential Increase
The size of the congestion window (cwnd) starts with
one maximum segment size (MSS)
MSS is negotiated during the connection establishment
If an ACK arrives, cwnd = cwnd + 1
if two segments are acknowledged
cumulatively, the size of the cwnd
increases by only 1, not 2
The algorithm starts slowly, but
grows exponentially until it reaches
a threshold, named ssthresh
The growth rate is exponential in
terms of each round-trip time
[Figure 3.66 Slow start, exponential increase]
3.4.9 TCP Congestion Control (4/12) Congestion Policies (2/2)
[Figure 3.67 Congestion avoidance, additive increase]
Congestion Avoidance: Additive Increase
Start when cwnd ≥ ssthresh
slow down the exponential growth
If an ACK arrives, cwnd = cwnd + (1/cwnd)
All segments in the previous “window”
should be acknowledged to increase
the window 1 MSS bytes
A window is the number of segments
transmitted during RTT
The size of the congestion
window increases additively
until congestion is detected
The growth rate is linear in terms
of each round-trip time
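The two growth rules (slow start below ssthresh, additive increase at or above it) can be simulated per round trip; a minimal sketch in Python, assuming cwnd is measured in MSS units and every segment in the window is acknowledged once per round:

```python
def cwnd_after_round(cwnd: float, ssthresh: float) -> float:
    """cwnd (in MSS units) after one round trip, one ACK per segment sent."""
    for _ in range(int(cwnd)):        # one ACK arrives per segment in the window
        if cwnd < ssthresh:
            cwnd += 1                 # slow start: cwnd doubles per RTT
        else:
            cwnd += 1 / cwnd          # congestion avoidance: ~+1 MSS per RTT
    return cwnd

# Trajectory starting at cwnd = 1 with a hypothetical ssthresh of 8
cwnd, ssthresh = 1.0, 8.0
history = []
for _ in range(5):
    history.append(round(cwnd))
    cwnd = cwnd_after_round(cwnd, ssthresh)
print(history)   # [1, 2, 4, 8, 9]: exponential up to ssthresh, then linear
```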
3.4.9 TCP Congestion Control (5/12) Fast Recovery
The fast-recovery algorithm is optional in TCP
Newer versions of TCP try to use it
It starts when three duplicate ACKs arrive, a sign of light congestion in the network
If an additional duplicate ACK arrives, cwnd = cwnd + 1 (one MSS per duplicate ACK)
Like slow start, the algorithm is therefore an exponential increase
3.4.9 TCP Congestion Control (6/12) Policy Transition – Tahoe TCP (1/2)
[Figure 3.68 FSM for Tahoe TCP]
Early versions of TCP, known as Tahoe TCP, used only two algorithms in their congestion policy
3 dupACKs: Three duplicate ACKs arrived
ssthresh: Slow start threshold
3.4.9 TCP Congestion Control (7/12) Policy Transition – Tahoe TCP (2/2)
[Figure 3.69 Example of Tahoe TCP]
3.4.9 TCP Congestion Control (8/12) Policy Transition – Reno TCP (1/2)
[Figure 3.70 FSM for Reno TCP]
A newer version, Reno TCP, uses three different algorithms in its congestion policy
3.4.9 TCP Congestion Control (9/12) Policy Transition – Reno TCP (2/2)
[Figure 3.71 Example of a Reno TCP]
3.4.9 TCP Congestion Control (10/12) Policy Transition – NewReno TCP
A later version of TCP, NewReno, made an extra optimization on Reno TCP
TCP checks to see if more than one segment is lost in the current
window when three duplicate ACKs arrive
When TCP receives three duplicate ACKs, it retransmits the lost segment and then waits until a new ACK (not a duplicate) arrives
If that ACK number defines a position between the retransmitted segment and the end of the window, it is possible that the segment starting at that position is also lost
NewReno TCP retransmits this segment as well, to avoid receiving more and more duplicate ACKs for it
3.4.9 TCP Congestion Control (11/12) Additive Increase, Multiplicative Decrease
[Figure 3.72 Additive increase, multiplicative decrease (AIMD)]
After it passes the initial slow-start state, the congestion window size follows a sawtooth pattern called additive increase, multiplicative decrease (AIMD)
Additive increase: when an ACK arrives, cwnd = cwnd + (1/cwnd), so the window grows by about one MSS per RTT
Multiplicative decrease: when congestion is detected, cwnd = cwnd / 2
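The sawtooth can be reproduced with a few lines of simulation (a toy sketch: cwnd is counted in MSS units, the per-RTT form of the additive increase is used, and the loss round is invented for illustration):

```python
def aimd(rounds, loss_rounds, cwnd=1.0):
    """Trace the congestion window under AIMD, one entry per RTT.

    rounds      -- number of RTTs to simulate
    loss_rounds -- set of RTT indices at which congestion is detected
    """
    trace = []
    for r in range(rounds):
        if r in loss_rounds:
            cwnd = cwnd / 2    # multiplicative decrease
        else:
            cwnd = cwnd + 1    # additive increase: +1 MSS per RTT
        trace.append(cwnd)
    return trace

# Window climbs linearly, halves on congestion, then climbs again.
print(aimd(8, {4}))   # [2.0, 3.0, 4.0, 5.0, 2.5, 3.5, 4.5, 5.5]
```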
3.4.9 TCP Congestion Control (12/12) TCP Throughput
The TCP throughput can be calculated as
Throughput = (0.75 × Wmax) / RTT
Wmax is the average of the window sizes when congestion occurs
Example 3.21
If MSS = 10 KB (kilobytes) and RTT = 100 ms in Figure 3.72, we can calculate the throughput as shown below
Wmax = (10 + 12 + 10 + 8 + 8) / 5 = 9.6 MSS = 96 KB
Throughput = (0.75 × Wmax) / RTT = (0.75 × 96 KB) / 100 ms = 720 KBps = 5.625 Mbps
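The arithmetic of Example 3.21 can be checked directly (the example's binary units are assumed: 1 KB = 1024 bytes and 1 Mb = 1024 Kb):

```python
MSS = 10      # segment size in KB
RTT = 0.1     # round-trip time in seconds

windows = [10, 12, 10, 8, 8]          # window sizes (in MSS) at congestion
w_max = sum(windows) / len(windows)   # average window in MSS
w_max_kb = w_max * MSS                # average window in KB

throughput_kbps = 0.75 * w_max_kb / RTT        # KB per second
throughput_mbps = throughput_kbps * 8 / 1024   # binary megabits per second

print(w_max)             # 9.6 MSS
print(throughput_kbps)   # 720.0 KBps
print(throughput_mbps)   # 5.625 Mbps
```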
3.4.10 TCP Timers (1/12)
To perform their operations smoothly, most TCP
implementations use at least four timers:
Retransmission timer
To handle the retransmission time-out (RTO)
Persistence timer
To deal with a zero-window-size advertisement to avoid deadlock
Keepalive timer
To prevent a long idle connection between two TCPs
TIME-WAIT timer
To be used during connection termination
3.4.10 TCP Timers (2/12) Retransmission Timer (1/7)
We can define the following rules for the retransmission timer:
When TCP sends the segment in front of the sending queue, it starts the
timer.
When the timer expires, TCP resends the first segment in front of the queue,
and restarts the timer.
When a segment or segments are cumulatively acknowledged, the segment
or segments are purged from the queue.
If the queue is empty, TCP stops the timer; otherwise, TCP restarts the timer
3.4.10 TCP Timers (3/12) Retransmission Timer (2/7)
To calculate the Retransmission Time-Out (RTO), we first need to
calculate the Round-Trip Time (RTT)
Calculating RTT in TCP is an involved process
Measured RTT
Smoothed RTT
RTT Deviation
3.4.10 TCP Timers (4/12) Retransmission Timer (3/7)
Measured RTT (RTTM)
The time required for the segment to reach the destination and be acknowledged
The acknowledgment may include other segments
Likely to change for each round trip
In TCP, there can be only one RTT measurement in progress at any time
Smoothed RTT (RTTS)
A weighted average of RTTM and the previous RTTS: RTTS = (1 − α) × RTTS + α × RTTM
The value of α is implementation-dependent
Typical value: α = 0.125
3.4.10 TCP Timers (5/12) Retransmission Timer (4/7)
RTT deviation (RTTD)
The deviation is calculated from RTTS and RTTM: RTTD = (1 − β) × RTTD + β × |RTTS − RTTM|
The value of β is implementation-dependent
The value of β is usually set to 0.25
Retransmission Time-Out (RTO)
Based on the smoothed round-trip time and its deviation: RTO = RTTS + 4 × RTTD
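The three quantities can be combined into a small estimator (a sketch of the standard computation; the first-sample initialization RTTS = RTTM and RTTD = RTTM/2 follows common practice and is an assumption here):

```python
class RTOEstimator:
    """Track smoothed RTT, RTT deviation, and RTO from RTT samples."""

    def __init__(self, alpha=0.125, beta=0.25):
        self.alpha = alpha   # weight of the new measurement in RTTS
        self.beta = beta     # weight of the new deviation in RTTD
        self.rtts = None     # smoothed RTT
        self.rttd = None     # RTT deviation

    def sample(self, rttm):
        """Feed one measured RTT (RTTM) and return the updated RTO."""
        if self.rtts is None:
            # First measurement: RTTS = RTTM, RTTD = RTTM / 2
            self.rtts = rttm
            self.rttd = rttm / 2
        else:
            # Deviation is updated with the old RTTS, then RTTS itself.
            self.rttd = (1 - self.beta) * self.rttd \
                + self.beta * abs(self.rtts - rttm)
            self.rtts = (1 - self.alpha) * self.rtts + self.alpha * rttm
        return self.rto()

    def rto(self):
        # RTO = RTTS + 4 * RTTD
        return self.rtts + 4 * self.rttd
```

For example, a first sample of 1.0 s gives RTO = 1.0 + 4 × 0.5 = 3.0 s; a second identical sample shrinks the deviation, so the RTO drops toward RTTS.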
3.4.10 TCP Timers (6/12) Retransmission Timer (5/7)
Example 3.22
[Figure 3.73 Example 3.22]
3.4.10 TCP Timers (7/12) Retransmission Timer (6/7)
Suppose that a segment is not acknowledged during the RTO period
and is therefore retransmitted
TCP cannot tell whether the ACK is for the original segment or for the retransmitted one
How can TCP determine the current RTT?
Karn’s algorithm: Do not update the value of RTT until you send a
segment and receive an ACK without the need for retransmission.
The value of RTO is doubled for each retransmission (exponential
backoff strategy)
[Figure: two sender-receiver timelines showing the RTT ambiguity. In one, the original segment is lost and the ACK answers the retransmission; in the other, the ACK of the original segment arrives after the retransmission]
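Karn's algorithm and the exponential backoff can be sketched together (a toy model; the `update_rtt` callback, standing in for the smoothed-RTT computation, is a hypothetical hook):

```python
class RetransmissionTimer:
    """Karn's algorithm plus exponential RTO backoff (toy model).

    update_rtt -- callback that feeds a clean RTT sample to the
                  smoothed-RTT estimator (hypothetical hook).
    """

    def __init__(self, rto, update_rtt):
        self.rto = rto
        self.update_rtt = update_rtt
        self.retransmitted = False

    def on_timeout(self):
        # Exponential backoff: double the RTO on every retransmission.
        self.retransmitted = True
        self.rto *= 2

    def on_ack(self, measured_rtt):
        # Karn's algorithm: discard the RTT sample if the acknowledged
        # segment was retransmitted, because the sample is ambiguous.
        if not self.retransmitted:
            self.update_rtt(measured_rtt)
        self.retransmitted = False
```

Only an ACK for a segment that was sent exactly once updates the RTT estimate; every timeout doubles the RTO until such an ACK arrives.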
3.4.10 TCP Timers (8/12) Retransmission Timer (7/7)
Example 3.23
[Figure 3.74 Example 3.23]
3.4.10 TCP Timers (9/12) Persistence Timer
To correct the deadlock that can be created by a lost acknowledgment
When the sending TCP receives an acknowledgment with a window size of zero, it starts a persistence timer
The value of the persistence timer is set to the value of the RTO
When the persistence timer goes off, the sending TCP sends a special segment called a probe
This segment contains only 1 byte of new data; this byte is ignored in calculating the sequence numbers for the rest of the data, and its sequence number is never acknowledged
3.4.10 TCP Timers (10/12) Persistence Timer
The probe causes the receiving TCP to resend the acknowledgment
When the persistence timer goes off again, the sending TCP sends another probe, doubles the persistence timer value, and resets the timer
The doubling continues until the value reaches a threshold (usually 60 s)
After that, the sender sends one probe segment every 60 seconds until the window is reopened
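The probe schedule can be sketched as a generator (illustrative only; the starting interval `rto` and the probe count are assumptions):

```python
def probe_intervals(rto, max_probes, threshold=60.0):
    """Yield the waiting time before each persistence-timer probe.

    The interval starts at RTO, doubles after each probe, and is
    capped at the threshold (usually 60 s); once the cap is reached,
    every further probe is sent 60 s apart.
    """
    interval = rto
    for _ in range(max_probes):
        yield min(interval, threshold)
        interval = min(interval * 2, threshold)

print(list(probe_intervals(5.0, 6)))   # [5.0, 10.0, 20.0, 40.0, 60.0, 60.0]
```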
3.4.10 TCP Timers (11/12) Keepalive Timer
To remedy the situation in which a client becomes silent after opening a TCP connection to a server
In this case, the connection remains open forever
Most implementations equip a server with a keepalive timer
Each time the server hears from a client, it resets this timer
The time-out is usually 2 hours
If the server does not hear from the client after 2 hours, it sends a probe
segment
If there is no response after 10 probes, sent 75 seconds apart, the server assumes that the client is down and terminates the connection
3.4.10 TCP Timers (12/12) TIME-WAIT Timer
The maximum segment life time (MSL) is the amount of time any
segment can exist in a network before being discarded
Common values are 30 seconds, 1 minute, or even 2 minutes
The TIME-WAIT (2MSL) timer is used when TCP performs an active
close and sends the final ACK (refer to Figure 3.53)
The connection must stay open for 2MSL to allow TCP to resend the final ACK in case it is lost
Resending happens when the RTO timer at the other end expires and the FIN and ACK segments are retransmitted
UDP vs TCP
Video Content
► A comparison between UDP and TCP
► When is it appropriate to use UDP and/or TCP?
► Link: https://www.youtube.com/watch?v=Vdc8TCESIg8
UDP vs TCP