
Continuous Scheduling for Data-Driven Peer-to-Peer Streaming

Jyrki Akkanen

Peer-to-peer team, Internet Laboratory

Nokia Research Center, Helsinki, Finland


Agenda

• Data-driven (pull-based mesh) P2P streaming

• Continuous data-driven P2P streaming

• Early experience and results


Our original motivation

• Desired a practical P2P streaming solution
  • For Symbian/S60 mobile phones
• No “perfect” solution exists yet
  • Desired to improve joining time
• On-going work; results are preliminary


Peer-to-peer streaming: two basic questions

• How to manage the overlay topology?
  • Centralized tracker
  • Random topology, 4–8 neighbors
• How to disseminate data?
  • Data-driven / pull-based mesh
  • Well known and much used in practice


Data-driven (pull-based) stream delivery

• Mesh topology
  • Each node has a small set of partners
• Stream split into pieces
• Nodes collect pieces in a buffer
  • Window of interest / request window at the front
  • Playing from the back
• Notify – Request – Response pattern
  • Buffer content advertised to partners
  • Scheduling: what to fetch, when, and from where
  • Requests sent to partners
  • Partners return pieces one by one
• Periodic operation
  • 1–2 second scheduling/request period

[Figure: notify → request → send exchange among peers, starting from the source; the buffer shows the window of interest at the front and the jitter window at the playback end]
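
As a minimal sketch of this pattern, the three message types might look as follows in C++ (all names and fields are our own illustration; the slides do not define a wire format):

    // Illustrative message types for the Notify – Request – Response pattern.
    #include <cstdint>
    #include <vector>

    struct Notify {                      // advertise buffer content to partners
        std::vector<uint32_t> have;      // sequence numbers of pieces we hold
    };

    struct Request {                     // ask a partner for specific pieces
        std::vector<uint32_t> want;      // picked by the scheduler
    };

    struct Response {                    // partners return pieces one by one
        uint32_t seqno;
        std::vector<uint8_t> payload;    // one piece of the stream
    };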


Challenges of periodic operation

• Buffer lag
  • Advertising + scheduling delay of 1–2 sec
  • The receiving node lags behind the sending partner
  • Partners who lag cannot contribute
• Slow reaction, slow re-transmit
  • Must wait for the next scheduling round
• Consequences
  • Long end-to-end delay
  • Long window of interest (30 sec or more)
  • Hard to find partners that don’t lag and have enough capacity
  • Low delivery ratio
• Can we run it faster?
  • Better gain, shorter delay
  • Increases overhead

[Figure: partners with the same timing mutually exchange a few pieces; partners with different timing carry unidirectional traffic]


Continuous Data-driven Streaming Protocol

• Speed up the basic data-driven protocol
  • Up to the point where it no longer operates periodically
• Small piece size
  • 1250 bytes = a single IP packet
• Narrow window of interest
  • 256 packets = 8.5 sec at 200 kbps
• Send incremental notifications continuously
  • As soon as possible after receiving a piece
• Scheduler runs continuously
  • Notifications are fed in one by one, whenever they arrive
  • Maintains a plan for fetching pieces from partners
• Requests sent continuously
  • One by one, at the “last possible” moment
  • Requests wait in the scheduler

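A minimal event-driven sketch of this continuous operation (our reconstruction; the planner below is a trivial FIFO stand-in for the real scheduler, which is described on a later slide):

    #include <cstdint>
    #include <deque>

    // Trivial stand-in for the fetch planner.
    struct Plan {
        std::deque<uint32_t> due;                      // pieces we still want
        void onNotify(uint32_t seqno) { due.push_back(seqno); }
        bool nextRequest(uint32_t* seqno) {
            if (due.empty()) return false;
            *seqno = due.front();
            due.pop_front();
            return true;
        }
    };

    // Fed in one by one: each incoming notification updates the plan at once,
    // instead of waiting for a 1-2 second scheduling round.
    void handleNotification(Plan& plan, uint32_t seqno) { plan.onNotify(seqno); }

    // Requests wait in the scheduler and leave one by one, at the "last
    // possible" moment, i.e. when the link next has room to send.
    void handleSendOpportunity(Plan& plan, void (*send)(uint32_t)) {
        uint32_t seqno;
        if (plan.nextRequest(&seqno)) send(seqno);
    }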


Communication over UDP

• A rare choice: TCP is usually preferred
• Better timing, improved reaction speed
• Control over re-transmission
• AIMD-based rate control; could also use TFRC

• May lead to difficulties with firewalls etc.
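
A bare-bones sketch of AIMD rate control as we read it here: the rate grows additively while packets get through and is halved on loss (the constants are illustrative assumptions, not values from the slides):

    #include <algorithm>

    class AimdRateController {
    public:
        void onPacketAcked() { rate_kbps_ += kStepKbps; }  // additive increase
        void onPacketLost() {                              // multiplicative decrease
            rate_kbps_ = std::max(kMinKbps, rate_kbps_ / 2.0);
        }
        double rateKbps() const { return rate_kbps_; }

    private:
        static constexpr double kStepKbps = 1.0;   // assumed step per ack
        static constexpr double kMinKbps  = 8.0;   // assumed floor
        double rate_kbps_ = 64.0;                  // assumed initial rate
    };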


Fighting against overhead

• Need to send many small amounts of control data all the time
• Piggybacking
  • Control data piggybacked onto stream data whenever possible
• The ability to exchange data in both directions also helps
  • Less need to send “control” packets without stream data
• In practice
  • Rate control regularly grants transmission opportunities on each link
  • On each opportunity, assemble a packet from whatever needs to be sent
  • Incremental notifications sent in small groups (1–10)
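
A sketch of how a single outgoing datagram could be assembled on each transmission opportunity, piggybacking a small batch of notifications onto stream data whenever any is due (all types and names are ours):

    #include <algorithm>
    #include <cstdint>
    #include <optional>
    #include <vector>

    struct Piece { uint32_t seqno; std::vector<uint8_t> payload; };

    struct Packet {
        std::optional<Piece> piece;           // stream data, if any is due
        std::vector<uint32_t> notifications;  // piggybacked "I now have" seqnos
    };

    // Assemble a packet from whatever needs to be sent on this link:
    // a due piece plus at most a small group (1-10) of notifications.
    Packet assemblePacket(std::optional<Piece> due_piece,
                          std::vector<uint32_t>& pending_notifies) {
        Packet pkt;
        pkt.piece = std::move(due_piece);
        const size_t n = std::min<size_t>(pending_notifies.size(), 10);
        pkt.notifications.assign(pending_notifies.begin(),
                                 pending_notifies.begin() + n);
        pending_notifies.erase(pending_notifies.begin(),
                               pending_notifies.begin() + n);
        return pkt;
    }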


Source fan-out policy

• A mandatory process to create the initial packet distribution at first-hop peers
• Selective advertising
• Simple fan-out policy
  • Decreased density of packets
  • Randomly selected packets

[Figure: fraction of packets advertised (0–100%) over the advertised window of interest (packet offset 0–40)]
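
One plausible reading of the fan-out policy as code: the source advertises each new piece to each first-hop partner only with some probability, so different partners see different random subsets of the stream (the density value is an assumption for illustration):

    #include <random>
    #include <vector>

    // For every new piece, pick a random subset of first-hop partners to
    // advertise it to; lowering `density` thins out each partner's view.
    std::vector<int> partnersToAdvertise(std::mt19937& rng,
                                         int num_partners, double density) {
        std::bernoulli_distribution coin(density);   // e.g. density = 0.5
        std::vector<int> selected;
        for (int p = 0; p < num_partners; ++p)
            if (coin(rng)) selected.push_back(p);
        return selected;
    }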


Simple scheduler

• A packet is scheduled or re-scheduled whenever
  • a notification is received
  • a request is lost
• Maintain a request queue per partner
  • Holds assigned requests in randomized order
  • Sorted by seqno + X, where X is uniformly random
  • Insert each request into the shortest queue
• Pick requests from the queues when needed
  • Window-based flow control
  • Minimize the number of outstanding requests
• The scheduling policy only matters for bandwidth-constrained peers
  • Other peers’ queues are almost always empty
  • Prefers rare and urgent packets
  • Virtual-time sorting increases diversity

[Figure: scheduling flow; requests are put in randomized order, inserted into the shortest partner queue, and picked when needed; lost requests and re-notified packets are re-scheduled]
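
A compact reconstruction of this scheduler (our code, not the original implementation): one queue per partner keyed by seqno + X with X uniformly random, insertion into the shortest queue, and window-based release of requests; the jitter range is an assumed parameter:

    #include <cstdint>
    #include <map>
    #include <random>
    #include <vector>

    class SimpleScheduler {
    public:
        explicit SimpleScheduler(size_t num_partners, uint32_t jitter_range = 32)
            : queues_(num_partners), jitter_(0, jitter_range) {}

        // Called when a notification is received or a request is lost.
        void schedule(uint32_t seqno) {
            size_t shortest = 0;
            for (size_t p = 1; p < queues_.size(); ++p)
                if (queues_[p].size() < queues_[shortest].size()) shortest = p;
            // Key = seqno + uniform random X: keeps a rough urgency order but
            // randomizes ties, which increases piece diversity ("virtual time").
            queues_[shortest].emplace(seqno + jitter_(rng_), seqno);
        }

        // Window-based flow control: release the next request for a partner
        // only while its number of outstanding requests is below `window`.
        bool pickRequest(size_t partner, int outstanding, int window,
                         uint32_t* seqno) {
            auto& q = queues_[partner];
            if (outstanding >= window || q.empty()) return false;
            *seqno = q.begin()->second;
            q.erase(q.begin());
            return true;
        }

    private:
        std::vector<std::multimap<uint32_t, uint32_t>> queues_;  // key -> seqno
        std::mt19937 rng_;
        std::uniform_int_distribution<uint32_t> jitter_;
    };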


Implementation status

• A portable C++ implementation

• Simulation in Linux

• Demonstration with Linux PCs

• Demonstration with Symbian phones


Simulations on a Linux PC

• Simple network simulator
  • Centralized router with tail-drop queues
  • 500 kbps links with 50 ms latency
• Artificial media stream
  • 200 kbps, 20 packets/sec
• Simple dynamic scenario
  • Random overlay topology, 4 partners per node
  • Initially 100 nodes
  • After 120 seconds, half of the nodes are dropped
  • Remaining peers re-connect so that all have 4 partners again
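
The same setup gathered into a single config sketch (field names are ours):

    struct SimConfig {
        int link_kbps         = 500;  // per-link capacity, tail-drop queues
        int link_latency_ms   = 50;
        int stream_kbps       = 200;  // artificial media stream
        int packets_per_sec   = 20;
        int partners_per_node = 4;    // random overlay topology
        int initial_nodes     = 100;  // half are dropped at t = 120 s
    };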


End-to-end delay, simulated

[Figure: end-to-end delay (0–10 sec) vs. time (60–180 sec) at 1, 3, and 5 hops; no frames are lost, even though 50% of peers are lost at 120 sec]


Recent simulation results (not in paper)

[Figures: delivery ratio (95–100%) vs. end-to-end latency (0–15 s), and share of peers (0–100%) vs. time to play (0–10 s), for 150 nodes stable, 50 and 100 nodes at 60 jpm, and 150 and 300 nodes at 180 jpm]

• Dynamic scenarios with a Poisson join/leave process
• Exponentially distributed random dwelling time of 50 or 100 sec
• Each newcomer connected to 6 randomly selected peers
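
A sketch of this churn model (our reconstruction, assuming "jpm" in the legends means joins per minute and that the 50/100 sec dwelling times are means): arrivals form a Poisson process, so both inter-arrival and dwelling times are exponentially distributed:

    #include <random>

    struct ChurnModel {
        std::mt19937 rng;
        std::exponential_distribution<double> interarrival;  // rate: joins per second
        std::exponential_distribution<double> dwell;         // rate: 1 / mean dwell

        ChurnModel(double joins_per_min, double mean_dwell_sec)
            : interarrival(joins_per_min / 60.0), dwell(1.0 / mean_dwell_sec) {}

        double secondsToNextJoin() { return interarrival(rng); }  // next peer joins
        double dwellTimeSeconds()  { return dwell(rng); }         // how long it stays
    };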


Live video streaming

• Live network of 4 Nokia N93i smart phones

• Ad-hoc WLAN network

• Video from the phone camera
  • H.263 profile 0, level 20
  • 352 × 288 pixels, 16M colors, 15 fps
  • Variable data rate, 64–315 kbps
• The other three phones viewed the video
  • The player learned the optimal playback point

• End-to-end latency was 10 sec

• No frames lost

• Full mesh overlay topology


Video rate at source, live

[Figure: video rate at the source (0–500 kbps) vs. time (46:00–58:00 min:sec)]


Window of interest and player jitter

[Figure: buffer offset (0–350 packets) vs. time (43:00–58:00 min:sec); curves labelled "end", "miss", and "play" delimit the window of interest and the jitter window]


Network overhead, live

• Due to heavy piggybacking, data and control packets cannot be distinguished
• Total protocol overhead: 14.4%
  • All transmitted bytes, including IP/UDP headers, relative to received stream data
  • Live network, 3 peers
• For comparison
  • Periodic data-driven overlay streaming: ~10%
  • Unidirectional HTTP downloading: 7.2% (our measurement)


Total network overhead, simulated (not in paper)

[Figures: simulated overhead breakdown (IP/UDP 5.81%, stream 6.40%, rate control 2.91%, tracker 0.11%, loss 0.01%), and total overhead (0–25%) vs. average number of neighbors (4.0–10.0) for 50, 100, 150, 200, and 300 nodes]


Continuous scheduling approach: summary

• Work in progress

• Not yet tested in real large-scale networks

• Overhead is largish, but not intolerable
  • Not yet optimized
• Seems to provide
  • Small latency
  • High delivery ratio
  • Good resilience to dynamics
• Difficulties with bandwidth constraints
  • Congestion close to the source
  • A few critical, bandwidth-constrained overlay links
• Need better fan-out and scheduling policies
  • Must diffuse new packets quickly, far out into the network