
MULTIMEDIA TRAFFIC MANAGEMENT ON TCP/IP OVER ATM-UBR

By Dr. ISHTIAQ AHMED CH.

OVERVIEW

Introduction.

Problem definition.

Previous related work.

Unique experimental design.

Analysis of different TCP implementations, showing that these implementations do not utilize the available bandwidth efficiently.

The Dynamic Granularity Control algorithm proposed for TCP, based on our analysis.

Conclusions.

INTRODUCTION

Management of multimedia communications requires:

Efficient resource management.

Maximum utilization of the allocated bandwidth.

Provision of QoS parameters.

Tools for Multimedia Communications

Among the available tools for multimedia communications, ATM networks and the TCP/IP protocol were selected.

ATM (Asynchronous Transfer Mode)

Features:

- Multi-service traffic categories: CBR (Constant Bit Rate), UBR (Unspecified Bit Rate)

- Promising traffic with Quality of Service (QoS)

- Academic network and easy to use

- High-speed network technology

TCP/IP

Most Widely Used Protocol in the Internet.

It has a lot of research potential to meet network communication requirements.

Source code and helping material are easily available.

PROBLEM DEFINITION

[Diagram: an ATM switch with buffer size 3K or 4K cells vs. an ATM switch with buffer size 1K or 2K cells.]

PROBLEM DEFINITION

Multimedia communications suffer from three major problems:

1. ATM switch buffer overflow.

2. Loss of Protocol Data Units by the protocol being used.

3. Fairness among multiple TCP connections.

My Research Problem

This research deals with the above-mentioned problems such that:

Congestion in the ATM network is avoided.

The allocated bandwidth is utilized efficiently.

Fairness among multiple TCP connections is maintained.

Transmission Control Protocol

Different implementations of TCP:
- TCP Tahoe
- TCP Reno
- TCP NewReno
- TCP SACK

Congestion control algorithms of TCP:
- Slow-Start
- Congestion Avoidance
- Fast Retransmit
- Fast Recovery

Previous Related Work

TCP/IP over ATM

Jacobson (1988): TCP Tahoe added the Slow-Start, Congestion Avoidance, and Fast Retransmit algorithms to avoid loss of data.

Jacobson (1990): TCP Reno modified the Fast Retransmit algorithm of TCP Tahoe and added the Fast Recovery algorithm.

Gunningberg (1994): The large MTU size of ATM causes throughput deadlock.

Previous Related Work

TCP/IP over ATM

Romanow (1995): Cells of a large packet lost at the ATM level heavily affect TCP throughput. This gives rise to cell discard strategies like PPD (Partial Packet Discard) and EPD (Early Packet Discard). The larger the MTU size, the smaller the TCP throughput.

Hoe (1996): The Slow-Start algorithm ends up pumping too much data. The Fast Retransmit algorithm may recover only one of the packet losses.
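The cell discard strategies can be illustrated with a toy buffer model. This is a simplified sketch under assumed parameters (cell counts, capacity, threshold are made up for illustration), not the behaviour of any particular switch: EPD decides at a packet's first cell whether the whole packet may enter the buffer, so the switch never queues useless half-packets.

```python
def epd_enqueue(buffer, capacity, threshold, packet_cells):
    """Admit or discard one packet's cells at a switch output buffer."""
    # Early Packet Discard: the decision is made once, at the first
    # cell of the packet. Above the threshold the entire packet is
    # refused, so no buffer space is wasted on unusable half-packets.
    accepting = len(buffer) <= threshold
    accepted = dropped = 0
    for _ in range(packet_cells):
        if accepting and len(buffer) < capacity:
            buffer.append("cell")
            accepted += 1
        else:
            # With plain tail-drop, cells lost here mid-packet would
            # still corrupt the packet (the case PPD tries to clean up).
            dropped += 1
    return accepted, dropped
```

Under this rule a congested buffer drops whole packets rather than scattering cell losses across many TCP segments, which is exactly why Romanow's study found EPD friendlier to TCP throughput.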

Previous Related Work

TCP/IP over ATM

Floyd (1996): The TCP Reno implementation is modified to recover multiple segment losses. The implementation is named TCP NewReno.

Mathis (1996): The Fast Retransmit and Fast Recovery algorithms are modified using the Selective Acknowledgment (SACK) option. The new TCP version is known as TCP SACK.

Problems of TCP/IP over ATM

SUMMARY of Related Research Work

o Segment losses badly affect the throughput of TCP over congested ATM networks.

o The Fast Retransmit and Fast Recovery algorithms of Reno TCP are unable to recover multiple segment losses in the same window of data.

o NewReno TCP and Linux TCP algorithms are supposed to recover these segment losses, but……

Previous research on TCP/IP over ATM is related to:

Multiple UBR streams from different sources contending at the same output port of the ATM switch.

A major part of the related research is based on simulation studies.

The ATM network becomes congested due to:

A CBR flow, which has absolute precedence, and a TCP flow on UBR sharing the same output port of the ATM switch.

The cell buffer size in the ATM switch for UBR meeting only the minimum requirement.

Unique Experimental Design

[Testbed diagram: two FreeBSD 3.2-R end hosts (A, B) exchanging TCP traffic over UBR through a Fujitsu EA1550 ATM switch, with a traffic generator (C) injecting CBR streams at the same output port; cell loss occurs in the switch. Measurement tools: Netperf and Tcpdump. The TCP throughput analysis depends on 4 parameters: cell loss, switch buffer size, MTU size, and socket buffer size.]

Unique Experimental Design

My Research Contribution

Throughput measurement and analysis of TCP over congested ATM under a variety of network parameters.

Throughput evaluation and analysis of several TCP implementations.

A new congestion control scheme proposed for TCP to avoid congestion in the ATM network and to improve the throughput of TCP.

Performance Analysis of Linux TCP

Congestion Control Algorithms of Linux TCP

o Slow-Start algorithm
o Congestion Avoidance
o Fast Retransmit algorithm
o Fast Recovery algorithm

Slow-Start Algorithm

[Diagram: sender, ATM switch, receiver; the sender transmits the 1st segment, then the 2nd segment, and the receiver returns an acknowledgment (ACK).]
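The combined effect of Slow-Start and Congestion Avoidance can be sketched as a simple window-growth model. This is an illustrative textbook sketch in whole-segment units; `window_growth` is a hypothetical helper, not code from the thesis implementation.

```python
def window_growth(ssthresh, rtts):
    """Return the congestion window (in segments) after each round trip.

    ssthresh -- slow-start threshold, in segments
    rtts     -- number of round-trip times to simulate
    """
    cwnd = 1                   # Slow-Start begins with one segment
    trace = []
    for _ in range(rtts):
        if cwnd < ssthresh:
            cwnd *= 2          # Slow-Start: double the window every RTT
        else:
            cwnd += 1          # Congestion Avoidance: one segment per RTT
        trace.append(cwnd)
    return trace
```

The exponential phase is what the slides mean by Slow-Start "pumping too much data": on a small switch buffer, the doubling overshoots the available capacity within a few RTTs.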

Congestion Avoidance Algorithm

[Diagram: sender, ATM switch, receiver; the window grows by one segment per RTT, with the 2nd segment sent after the acknowledgment (ACK) of the previous RTT arrives. A duplicate ACK signals a lost segment.]

Fast Retransmit and Fast Recovery Algorithms
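The trigger for these algorithms, three duplicate ACKs, can be sketched as follows. This is a simplified Reno-style model: the state dictionary and the `on_ack` helper are illustrative, not the kernel implementation analysed in the thesis.

```python
def on_ack(state, ack_no):
    """Process one ACK; return the sequence number to retransmit, or None."""
    if ack_no == state["last_ack"]:
        state["dup_acks"] += 1
        if state["dup_acks"] == 3:              # Fast Retransmit trigger
            state["ssthresh"] = max(state["cwnd"] // 2, 2)
            state["cwnd"] = state["ssthresh"]   # Fast Recovery (simplified:
                                                # halve instead of timing out
                                                # and restarting Slow-Start)
            return ack_no                       # resend the missing segment
    else:                                       # new data acknowledged
        state["last_ack"] = ack_no
        state["dup_acks"] = 0
    return None
```

Because only one retransmission is sent per trigger, this mechanism recovers a single loss per window, which is exactly the Reno weakness with multiple segment losses noted in the related-work summary.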

Throughput Results

CBR Stream = 100[Mbps] Socket Buffer Size=64Kbytes

[Graph: Effective Throughput [Mbps] (0-100) vs. UBR Switch Buffer Size [Kbytes] (0-200); curves: Reno TCP MTU=9180 bytes, Linux TCP MTU=9180 bytes, Linux TCP MTU=1500 bytes, Linux TCP MTU=512 bytes.]

Throughput Results

Switch Buffer Size = 53[Kbytes] Socket Buffer = 64[Kbytes] MTU=9180bytes

CBR Stream Pressure vs. TCP Throughput over UBR

[Graph: Throughput [Mbps] (0-140) vs. CBR Pressure [Mbps] (0-140); curves: Reno TCP, Linux TCP.]

Segments Acknowledged ( No. of bytes)

CBR=100Mbps MTU=9180bytes Buffer Size=53Kbytes

[Graph: Packets Acknowledged [Kbytes] (0-10000) vs. Time [sec] (0-10); Linux TCP, effective throughput = 7.06 Mbps.]

Analysis of Linux TCP

TCP throughput is less than 20% of the available bandwidth and varies between 14% and 16%.

Retransmission timeouts are fewer than with Reno TCP.

Linux TCP performs badly in connection-sensitive applications due to expiry of its retransmission timer.

The retransmission timer expires while deciding what to send.

Retransmission timeouts and FRR processes consume more than 50% of the total time.

If the MTU size is large, congestion occurs sooner.

Proposed Dynamic Granularity Control (DGC) algorithm for TCP

A more conservative version of Jacobson's congestion avoidance scheme is applied by reducing the MSS size.

Step 1: Congestion Avoidance

Decrease MSS to 1460 bytes if MSS > 1460 bytes.

If MSS = 1460 bytes, decrease MSS to 512 bytes.

Proposed DGC Algorithm for TCP

The fast retransmit machine (FRM) consists of the following stages: fast retransmission, fast recovery, TCP re-ordering, and segment loss.

Step 2: FRM. Reduce MSS to 512 bytes under FRM events.
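The two steps above can be sketched as a small granularity rule. The stepwise values (9180 down to 1460 down to 512 bytes) follow the slides' description, but the function names and code are an illustrative reconstruction, not the actual kernel patch.

```python
def dgc_congestion_avoidance(mss):
    """Step 1: one granularity reduction per congestion-avoidance signal.

    An ATM-sized MSS (e.g. 9180 bytes) drops first to the Ethernet
    MSS (1460 bytes), and only on a further signal down to 512 bytes.
    """
    if mss > 1460:
        return 1460
    if mss == 1460:
        return 512
    return mss                 # already at the finest granularity


def dgc_frm_event(mss):
    """Step 2: any FRM event forces the finest granularity at once."""
    return 512
```

The intuition is that smaller segments fragment into fewer cells each, so a burst of cell loss destroys fewer whole segments and the congested switch buffer fills more gradually.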

Implementation of DGC algorithm

DGC is implemented on Linux kernel 2.4.0-test10 with the ATM-0.78 distribution.

Sender-side implementation.

Results and Discussions

[Graph: Effective Throughput [Mbps] (0-120) vs. Switch Buffer Size [Kbytes] (0-200); curves: DGC TCP MTU=9180 bytes, Linux TCP MTU=9180 bytes, Linux TCP MTU=512 bytes, available bandwidth. CBR stream = 100[Mbps], window size = 64 Kbytes.]

Results and Discussions

[Graph: Throughput [Mbps] (0-160) vs. CBR Pressure [Mbps] (0-160); curves: DGC TCP MTU=9180 bytes, Linux TCP MTU=9180 bytes, Linux TCP MTU=512 bytes, available bandwidth. Switch buffer size = 53[Kbytes], window size = 64 Kbytes.]

Segments Acked (No. of Bytes)

[Graph: Number of Segments Acked [Kbytes] (0-60000) vs. Time [sec] (0-10); DGC TCP effective throughput = 41.71 Mbps, Linux TCP effective throughput = 7.06 Mbps. Switch buffer size = 53[Kbytes], MTU = 9180 bytes.]

TWO UBR STREAMS without any External CBR Pressure

[Graph: Effective Throughput [Mbps] (0-100) vs. Switch Buffer Size [Kbytes] (0-200); curves: UBR Stream 1 (CBR=100[Mbps]), UBR Stream 2 (CBR=100[Mbps]), single UBR stream [Mbps].]

Multiple Streams of Linux TCP under CBR pressure

[Graph: Effective Throughput [Mbps] (0-100) vs. Switch Buffer Size [Kbytes] (0-200); curves: maximum available bandwidth [Mbps], Linux TCP UBR1 (CBR=100[Mbps]), Linux TCP UBR2 (CBR=100[Mbps]).]

Linux TCP and DGC TCP Streams

[Graph: Effective Throughput [Mbps] (0-100) vs. Switch Buffer Size [Kbytes] (0-200); curves: DGC TCP UBR Flow 1 (CBR=100[Mbps]), Linux TCP UBR Flow 2 (CBR=100[Mbps]), maximum available bandwidth [Mbps].]

DGC and Linux TCP under CBR Pressure

[Graph: Throughput [Mbps] (0-140) vs. CBR Pressure [Mbps] (0-140); curves: DGC TCP UBR Flow 1, Linux TCP UBR Flow 2, total additive throughput [Mbps].]

CONCLUSIONS

The proposed TCP DGC algorithm uses more than 98% of the available bandwidth.

No retransmission timeout occurs, hence the synchronization effect is minimized.

Fairness is considerably better than in the other available flavors of TCP.

Final Concluding Remarks

Analysis of TCP Reno

1. The Slow-Start algorithm pumps too much data into the network.

2. If the MTU size is large, throughput will be better.

3. Throughput of TCP is less than 2% of the available bandwidth during heavy congestion in the network.

4. The retransmission timeout occurs too frequently, producing TCP throughput deadlock.

5. The Fast Retransmit and Fast Recovery algorithms are unable to recover multiple segment losses.

Final Concluding Remarks

Analysis of Linux TCP

1. Throughput of Linux TCP is improved compared to TCP Reno, but is still less than 20% of the available bandwidth.

2. If the MTU size is large, the congestion state of the network is reached sooner.

3. More than 50% of the total time is consumed to recover a segment loss.

4. Retransmission timeouts still occur; therefore, Linux TCP performs badly in connection-sensitive applications.

Final Concluding Remarks

Analysis of the proposed TCP DGC algorithm

1. Almost all of the available bandwidth is utilized.

2. The idea is equally applicable to other communication protocols facing congestion problems in the network.

3. DGC TCP may not be useful over the Internet in certain cases.

Future Directions

High-Speed TCP http://www.icir.org/floyd/hstcp.html

Fast TCP http://netlab.caltech.edu/FAST/

TCP Performance Tuning Page http://www.psc.edu/networking/projects/

Future Directions

Performance analysis of multiple TCP connections, fairness, and buffer requirements over Gigabit networks.

Multi-homing and multi-streaming with SCTP (Stream Control Transmission Protocol).

Performance analysis of TCP flavours over wireless ad-hoc networks.

Thank you very much.
