Modeling and Emulation of Internet Paths

Pramod Sanaga, Jonathon Duerig, Robert Ricci, Jay Lepreau
University of Utah

TRANSCRIPT

Page 1: Modeling and Emulation of Internet Paths


Modeling and Emulation of Internet Paths

Pramod Sanaga, Jonathon Duerig, Robert Ricci, Jay Lepreau

University of Utah

Page 2: Modeling and Emulation of Internet Paths


Distributed Systems

• How will you evaluate your distributed system?
  – DHT
  – P2P
  – Content Distribution

Page 3: Modeling and Emulation of Internet Paths


Network Emulation

Page 4: Modeling and Emulation of Internet Paths


Why Emulation?

• Test distributed systems
  + Repeatable
  + Real
    • PCs, Applications, Protocols
  + Controlled
    • Dedicated Nodes, Network Parameters
• Link Emulation

Page 5: Modeling and Emulation of Internet Paths


Goal: Path Emulation

[Figure: hosts connected through an emulated Internet]

Page 6: Modeling and Emulation of Internet Paths


Link by Link Emulation

[Figure: a multi-hop path emulated hop by hop, with link capacities of 10 Mbps, 50 Mbps, 100 Mbps, and 50 Mbps]

Page 7: Modeling and Emulation of Internet Paths


End to End Emulation

[Figure: the whole path emulated as a single hop characterized by RTT and available bandwidth (ABW)]

Page 8: Modeling and Emulation of Internet Paths


Contributions

• Principles for path emulation
  – Pick appropriate queue sizes
  – Separate capacity from ABW
  – Model reactivity as a function of flows
  – Model shared bottlenecks

Page 9: Modeling and Emulation of Internet Paths


Obvious Solution

[Figure: two applications connected through a link emulator; each direction has a shaper, a delay stage, and a queue, configured with the path's RTT and ABW]

Page 10: Modeling and Emulation of Internet Paths


Obvious Solution (Good News)

• Actual path (forward 2.3 Mbps, reverse 2.2 Mbps, measured with Iperf)
• Bandwidth accuracy
  – 8.0% error on forward path
  – 7.2% error on reverse path

Page 11: Modeling and Emulation of Internet Paths


Obvious Solution (Bad News)

[Figure: RTT (ms) vs. time (s) for the real path and the obvious emulation]

Latency is an order of magnitude higher

Page 12: Modeling and Emulation of Internet Paths


Obvious Solution (More Problems)

• Measure an asymmetric path (forward 6.4 Mbps, reverse 2.6 Mbps, measured with Iperf)
• Bandwidth accuracy
  – 50.6% error on forward path!
    • Much smaller than on the real path
  – 8.5% error on reverse path

Page 13: Modeling and Emulation of Internet Paths


What’s Happening?

• TCP increases congestion window until it sees a loss

• There are no losses until the queue fills up

• Queue fills up

• Delay grows until the queue is full

• Large delays were queuing delays

Page 14: Modeling and Emulation of Internet Paths


What’s Happening

[Figure: link emulator between two applications; forward direction shaped to 6.4 Mbps with 12 ms delay and a 50-packet queue, reverse direction to 2.6 Mbps with 12 ms delay and a 50-packet queue]

Page 15: Modeling and Emulation of Internet Paths


Delays

[Figure: the same emulator; a full queue adds 97 ms of queueing delay in the forward direction and 239 ms in the reverse direction]

How do we make this more rigorous?
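These numbers are just the time to drain a full queue at the shaped rate: queued bits divided by capacity. A minimal sketch of the arithmetic, assuming 1500-byte packets (the packet size is our assumption, not stated on the slide):

```python
# Worst-case queueing delay of a full drop-tail queue: queued bits / drain rate.
def queueing_delay_ms(queue_packets, packet_bytes, capacity_bps):
    return queue_packets * packet_bytes * 8 / capacity_bps * 1000

print(queueing_delay_ms(50, 1500, 6.4e6))  # ~93.8 ms, close to the observed 97 ms
print(queueing_delay_ms(50, 1500, 2.6e6))  # ~230.8 ms, close to the observed 239 ms
```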

Page 16: Modeling and Emulation of Internet Paths


Maximum Tolerable Queueing Delay

d_max = w_max / ABW − RTT_base

  d_max: max tolerable queueing delay
  w_max: max window size
  ABW: target bandwidth
  RTT_base: base RTT of the path

TCP throughput is at most w_max / RTT, so reaching the target ABW requires RTT ≤ w_max / ABW; what remains after the base RTT is the budget for queueing delay. Queue size must be limited from above by this delay.

Page 17: Modeling and Emulation of Internet Paths


Upper Bound (Queue Size)

q_f + q_r ≤ C · d_max

  q_f + q_r: total queue size (forward plus reverse)
  C: capacity
  d_max: max tolerable queueing delay

Upper limit is proportional to capacity

Page 18: Modeling and Emulation of Internet Paths


Upper Bound

• Forward direction: 13 KB
  – 9 packets
• Reverse direction: 5 KB
  – 3 packets
• Queue size is too small
  – Drops even small bursts!
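A quick sanity check of those packet counts, assuming 1500-byte packets (our assumption):

```python
# Convert the 13 KB and 5 KB byte budgets into 1500-byte packets.
print(round(13_000 / 1500))  # 9 packets (forward)
print(round(5_000 / 1500))   # 3 packets (reverse)
```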

Page 19: Modeling and Emulation of Internet Paths


Lower Bound (Queue Size)

q ≥ Σ_{f ∈ F} w_f

  Σ w_f: total window size across the foreground flows F

Small queues drop packets on bursts

Page 20: Modeling and Emulation of Internet Paths


Can’t Fulfill Both Bounds

• Upper limit is 13 KB
• Lower limit is 65 KB
  – 1 TCP connection, no window scaling
• No viable queue size (see the sketch below)
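Both bounds can be checked mechanically. A minimal sketch under stated assumptions: one TCP connection with a 64 KB window (no window scaling), and illustrative input values rather than the slide's exact path (the helper name is ours):

```python
# Upper bound (slide 17): q_f + q_r <= C * d_max, where
# d_max = w_max / ABW - RTT_base (slide 16).
# Lower bound (slide 19): the queue must absorb a full window burst,
# q >= sum of the foreground flows' window sizes.
def queue_bounds_bytes(capacity_bps, target_abw_bps, base_rtt_s,
                       window_bytes, num_flows):
    d_max = window_bytes * 8 / target_abw_bps - base_rtt_s  # delay budget (s)
    upper = capacity_bps / 8 * d_max   # largest queue the delay budget allows
    lower = window_bytes * num_flows   # smallest queue that absorbs bursts
    return lower, upper

lower, upper = queue_bounds_bytes(6.4e6, 6.4e6, 0.024, 65535, 1)
print(lower <= upper)  # False: ~46 KB upper vs. 64 KB lower, no viable size
```

With the shaper pinned to the ABW, the delay budget shrinks and the upper bound can fall below a single window, as here.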

Page 21: Modeling and Emulation of Internet Paths


Capacity vs. ABW

• Capacity is the rate at which everyone’s packets drain from the queue

• ABW is the rate at which MY packets drain from the queue

• Link emulator replicates capacity

• Setting capacity == ABW interacts with queue size
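A hypothetical worked example of the distinction (numbers invented):

```python
capacity_bps = 10_000_000       # rate at which everyone's packets drain
cross_traffic_bps = 7_000_000   # competing flows at the same bottleneck
abw_bps = capacity_bps - cross_traffic_bps  # 3 Mbps left for my packets
```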

Page 22: Modeling and Emulation of Internet Paths


Capacity and Queue Size

[Figure: queue size (KB) vs. capacity (Mbps), showing the upper and lower bounds and the region of viable queue sizes between them]

Page 23: Modeling and Emulation of Internet Paths


Our Solution

• Set queue based on constraints
• Set shaper to high bandwidth
• Introduce CBR cross-traffic

[Figure: path emulator with a queue, shaper, and delay stage in each direction; CBR sources and sinks inject constant-bit-rate cross-traffic through each queue alongside the application traffic]
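The key move is to shape at the path's capacity rather than at the ABW, and let CBR cross-traffic occupy the difference. A minimal sketch of the rate calculation (function name and values are ours):

```python
def cbr_fill_bps(capacity_bps, target_abw_bps):
    # Shape the emulated link at full capacity; the CBR source consumes
    # the surplus, leaving exactly the target ABW for foreground traffic.
    return capacity_bps - target_abw_bps

print(cbr_fill_bps(10_000_000, 6_400_000))  # 3,600,000 bps of CBR fill
```

Because the queue now drains at capacity instead of at the ABW, the upper bound on queue size grows and the viable region shown on slide 22 opens up.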

Page 24: Modeling and Emulation of Internet Paths


CBR Traffic

• TCP cross-traffic backs off

• CBR does not

• Background traffic must not back off
  – If it does, the user will see larger ABW than they set

Page 25: Modeling and Emulation of Internet Paths


Reactivity

• Reactive CBR traffic?!
• Approximate aggregate ABW as a function of the number of foreground flows
• Change CBR traffic based on flow count (sketched below)

ABW = f(F)
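One way to realize ABW = f(F): measure the aggregate ABW the real path yields to F concurrent foreground flows, then retune the CBR rate whenever the flow count changes. A hedged sketch (the table values are invented, not measurements):

```python
# Hypothetical measurements: aggregate ABW (bps) seen by F foreground flows.
AGGREGATE_ABW = {1: 6.4e6, 2: 8.0e6, 4: 9.0e6}

def retune_cbr_bps(capacity_bps, num_flows):
    # More foreground flows win a larger share from reactive background
    # traffic, so aggregate ABW rises and the CBR fill shrinks.
    return capacity_bps - AGGREGATE_ABW[num_flows]
```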

Page 26: Modeling and Emulation of Internet Paths


Does it work?

Page 27: Modeling and Emulation of Internet Paths


Testing Bandwidth

Asymmetric path (forward 6.4 Mbps, reverse 2.6 Mbps, measured with Iperf):

          Obvious Error   Our Error
Forward   50.6 %          4.1 %
Reverse   8.5 %           5.0 %

Page 28: Modeling and Emulation of Internet Paths


More Bandwidth Tests

Forward      Reverse      Link Emulator Error   Path Emulator Error
2.3 Mbps     2.2 Mbps     8.0 %                 2.1 %
4.1 Mbps     2.8 Mbps     31.7 %                5.8 %
6.4 Mbps     2.6 Mbps     50.6 %                4.1 %
25.9 Mbps    17.2 Mbps    20.4 %                10.2 %
8.0 Mbps     8.0 Mbps     22.0 %                6.3 %
12.0 Mbps    12.0 Mbps    21.5 %                6.5 %
10.0 Mbps    3.0 Mbps     66.5 %                8.5 %

Page 29: Modeling and Emulation of Internet Paths


Testing Delay

[Figure: RTT (ms) vs. time (s) for the real path and the path emulator]

Obvious solution was an order of magnitude higher

Page 30: Modeling and Emulation of Internet Paths


BitTorrent Setup

• Measured conditions among 13 PlanetLab hosts

• 12 BitTorrent Clients, 1 Seed

• Isolate capacity and queue size changes

Page 31: Modeling and Emulation of Internet Paths


BitTorrent

[Figure: download duration (s) per node, comparing the obvious solution with our solution]

Page 32: Modeling and Emulation of Internet Paths


Related Work

• Link Emulation
  – Emulab
  – Dummynet
  – ModelNet
  – NIST Net

Page 33: Modeling and Emulation of Internet Paths


Related Work

• Queue Sizes
  – Appenzeller et al. (SIGCOMM 2004)
    • Large number of flows: buffer requirements are small
    • Small number of flows: queue size should be the bandwidth-delay product
  – We build on this work to determine our lower bound
  – We focus on emulating a given bandwidth rather than maximizing performance

Page 34: Modeling and Emulation of Internet Paths


Related Work

• Characterize traffic through a particular link
  – Harpoon (IMC 2004)
  – Swing (SIGCOMM 2006)
  – Tmix (CCR 2006)

• We use only end-to-end measurements and characterize reactivity as a function of flows

Page 35: Modeling and Emulation of Internet Paths


Conclusion

• New path emulator
  – End-to-end conditions
• Four principles combine for accuracy
  – Pick appropriate queue sizes
  – Separate capacity from ABW
  – Model reactivity as a function of flows
  – Model shared bottlenecks

Page 36: Modeling and Emulation of Internet Paths


Questions?

• Available now at www.emulab.net

• Email: [email protected]

Page 37: Modeling and Emulation of Internet Paths


Backup Slides

Page 38: Modeling and Emulation of Internet Paths


Does capacity matter?

Page 39: Modeling and Emulation of Internet Paths


Scale

Page 40: Modeling and Emulation of Internet Paths


Shared Bottlenecks are Hard

Page 41: Modeling and Emulation of Internet Paths


Stationarity