
Gossip Scheduling for Periodic Streams in Ad-hoc WSNs

Ercan Ucan, Nathanael Thompson, Indranil Gupta

Department of Computer Science, University of Illinois at Urbana-Champaign

Distributed Protocols Research Group: http://dprg.cs.uiuc.edu

Gossip in Ad-hoc WSNs

Useful for broadcast applications:
- Broadcast of queries [TinyDB]
- Routing information spread [HHL06]
- Failure detection, topology discovery and membership [LAHS05]
- Code propagation / sensor reprogramming [Trickle]

Canonical Gossip

- Gossip (or epidemic) = probabilistic forwarding of broadcasts [BHOXBY99, DHGIL87]
- In sensor networks: forward a broadcast to neighbors with probability p [HHL06]
- Compared to flooding, gossip:
  - has high (but probabilistic) reliability if p > 0.7
  - saves energy
  - has comparable latency

Canonical Gossip [HHL06]

GOSSIP (per stream)
1. p = gossip probability
2. loop:
3.   for each new message m do
4.     r = random number between 0.0 and 1.0
5.     if (r < p) then
6.       broadcast message m
7.   sleep gossip period
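
A minimal executable sketch of this loop in Python; the radio.broadcast() call and the per-stream message queue are hypothetical placeholders, not part of the paper:

import random, time

def canonical_gossip(stream, radio, p=0.7, gossip_period=1.0):
    # Forward each new message on this one stream with probability p,
    # then sleep for one gossip period before checking again.
    while True:
        for m in stream.new_messages():   # hypothetical per-stream message queue
            if random.random() < p:
                radio.broadcast(m)        # hypothetical radio broadcast primitive
        time.sleep(gossip_period)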

Target Setting

Our target setting:
- Multiple broadcast streams, each initiated by a separate publisher node
- Each stream has a fixed stream period: the source initiates updates/broadcasts periodically
- Different streams can have different periods

Canonical gossip doesn't work:
- It treats each stream individually
- Overhead grows with the number of streams

First-cut idea: for periodic streams, combine gossips.

Piggybacking

- Combine multiple streams into one
- Piggybacking: create a gossip message containing the latest updates from multiple streams
- Basically, each node gossips a "combined" stream
- Generates fewer messages and allows longer idle/sleep periods

Piggyback Gossip

PIGGYBACK GOSSIP
1. p = gossip probability
2. loop:
3.   for each constituent stream s
4.     for each new message m in s do
5.       r = random number between 0.0 and 1.0
6.       if (r < p) then
7.         add m to piggyback gossip message b
8.   broadcast message b
9.   sleep gossip period
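
Continuing the earlier sketch (same hypothetical radio and stream objects), the piggybacked variant packs the sampled updates of all constituent streams into one message per gossip period:

import random, time

def piggyback_gossip(streams, radio, p=0.7, gossip_period=1.0):
    # One gossip message per period, carrying sampled updates from
    # every constituent stream in this node's group.
    while True:
        b = []                                # the piggyback gossip message
        for s in streams:
            for m in s.new_messages():        # hypothetical per-stream queue
                if random.random() < p:
                    b.append(m)
        if b:                                 # skip empty messages (small practical addition)
            radio.broadcast(b)                # hypothetical broadcast primitive
        time.sleep(gossip_period)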

Gossip Scheduling

Basic piggybacking does not work if:
- Streams have different periods
- Network packet payload size is finite

Solution: create a gossip schedule
- Determines which streams are piggybacked/packed into which gossip messages
- Runs asynchronously at each node

Static Scheduling Problem

- Solved centrally, and then followed by all nodes
- Given a set of periodic streams, satisfy two requirements:
  I. A new piggyback message must not exceed the maximum network payload size
  II. Maintain the reliability, scalability and latency of canonical gossiping on individual streams
- Approach:
  - Create groups of streams (k constraint)
  - For each stream group, send gossip with period = min(periods of all streams in that group)
  - Each gossip contains the latest updates from every stream in the group

Stream Groups: k constraint

Each stream group contains <= k streams, due to:
1. The limit on network packet payload size (for TinyOS, 28 B)
2. The sizes of the streams' update messages (assumed the same for all streams)

E.g., for a 28 B payload and 5 B update messages, k = 5.

The k constraint specifies the maximum number of streams in one piggyback gossip message.
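
A minimal arithmetic sketch, assuming (as stated above) that all updates have the same fixed size:

# streams per piggyback message = payload bytes // update bytes
k = 28 // 5   # = 5 for the TinyOS example above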

Relatedness Metric

Relatedness metric between each pair of streams i, j with periods t_i and t_j:

R(i,j) = min(t_i, t_j) / max(t_i, t_j)

(note that 0 < R(i,j) <= 1.0)

- Two streams are related if they have a high R value
- For two streams with similar stream periods, combining them maximizes the utilization of piggybacking

[Figure: gossip message containing Pub3 and Pub4, sent every 6 sec (k = 2)]
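
A small sketch of the metric in Python (the example periods are illustrative, not from the paper):

def relatedness(t_i, t_j):
    # R(i, j) = min/max of the two stream periods, always in (0, 1]
    return min(t_i, t_j) / max(t_i, t_j)

relatedness(6, 8)   # 0.75 -- similar periods, highly related
relatedness(2, 10)  # 0.20 -- dissimilar periods, weakly related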

Scheduling using Relatedness

- In the gossip schedule, highly related streams should be combined
- Yet the k constraint must be satisfied
- Express relatedness between streams in a semblance graph

Semblance Graph

- Each gossip stream is a vertex in a complete graph
- Edge weights represent the relatedness R(i,j) between streams

[Figure: example stream workload and its semblance graph]
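
A sketch of building such a graph from a map of stream periods (stream names and periods below are made up for illustration):

from itertools import combinations

def semblance_graph(periods):
    # Complete graph over streams: edge (i, j) weighted by relatedness
    # R(i, j) = min(t_i, t_j) / max(t_i, t_j).
    return {(i, j): min(periods[i], periods[j]) / max(periods[i], periods[j])
            for i, j in combinations(periods, 2)}

semblance_graph({"Pub1": 2, "Pub2": 3, "Pub3": 6, "Pub4": 8})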

Semblance Graph Sub-problem

Formally: partition the semblance graph into groups
- Each group has size no larger than k
- Minimize the sum of inter-group edge weights, i.e., maximize the sum of intra-group edge weights

Greedy construction heuristics, based on classical minimum spanning tree algorithms:
I. Prim-like
II. Kruskal-like

I. Prim-like Algorithm

- Scheduled set of groups S = Ø
- Initialize S with a single group consisting of one randomly selected vertex
- Iteratively:
  - Among all edges from S to V-S, select the maximum-weight edge e
  - Suppose e goes from a vertex in group g (in S) to some vertex v (in V-S)
  - Bring v into S:
    - If |g| < k, then add v into group g
    - Otherwise, create a new group g' in S containing the single vertex v
- Time complexity = O(V^2 log V)
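
A rough Python sketch of this heuristic, under stated assumptions (streams given as a dict of periods, the relatedness formula recomputed inline, ties broken arbitrarily); illustrative, not the paper's implementation:

import random

def prim_like_grouping(periods, k):
    # Greedy Prim-style grouping: repeatedly pull in the vertex connected
    # to the scheduled set S by the heaviest (most related) edge.
    def R(i, j):
        ti, tj = periods[i], periods[j]
        return min(ti, tj) / max(ti, tj)

    vertices = list(periods)
    start = random.choice(vertices)
    groups = [[start]]            # scheduled set S, as a list of stream groups
    group_of = {start: 0}         # vertex -> index of its group in `groups`
    while len(group_of) < len(vertices):
        # heaviest edge from S to V - S
        u, v = max(((u, v) for u in group_of for v in vertices if v not in group_of),
                   key=lambda e: R(*e))
        g = group_of[u]
        if len(groups[g]) < k:    # room left in u's group: v joins it
            groups[g].append(v)
            group_of[v] = g
        else:                     # group is full: v starts a new group
            group_of[v] = len(groups)
            groups.append([v])
    return groups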

II. Kruskal-like Algorithm

- Each vertex is initially in its own group (size = 1)
- Sort edges in decreasing order of weight
- Iteratively consider edges in that order, and try to add each edge:
  - It may combine two existing groups into one group
  - It may be an edge within an existing group
  - If adding the edge would cause a group to grow beyond k, drop the edge
- Time complexity: O(E log E + E) = O(V^2 log V)
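
A matching Python sketch of the Kruskal-like heuristic, under the same assumptions as the Prim-like sketch above (illustrative, not the paper's implementation):

from itertools import combinations

def kruskal_like_grouping(periods, k):
    # Kruskal-style grouping: start with singleton groups and merge along
    # the heaviest edges, skipping merges that would exceed k streams.
    def R(i, j):
        ti, tj = periods[i], periods[j]
        return min(ti, tj) / max(ti, tj)

    group_of = {v: {v} for v in periods}       # vertex -> its (shared) group set
    edges = sorted(combinations(periods, 2), key=lambda e: R(*e), reverse=True)
    for u, v in edges:
        gu, gv = group_of[u], group_of[v]
        if gu is gv:
            continue                           # edge lies within an existing group
        if len(gu) + len(gv) > k:
            continue                           # merge would violate the k constraint
        gu |= gv                               # combine the two groups
        for w in gv:
            group_of[w] = gu
    unique = {id(g): g for g in group_of.values()}
    return [list(g) for g in unique.values()]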

Comparison of Heuristics

- Simulated both algorithms on 5000 semblance graphs
- Stream periods selected from an interval within [0,1] of size (1 - homogeneity)
- Kruskal-like is better on the majority of inputs, for any number of streams, homogeneity, and k

Network Simulation

Canonical Gossip vs. Piggybacked Gossip

TinyOS simulator:
- Network size: 225 nodes
- Publishers: 24
- k constraint: 7
- Gossip probability: 70%

Evaluation

The total number of messages sent will decrease. What are the effects on:
- Energy consumption?
- Reliability?
- Latency?

Energy Savings

- Power consumption based on the mote datasheet
- Gossip scheduling reduces energy consumption by 40%

[Figure legend: "Flood" = canonical gossip; "PgFlood" = piggybacked, gossip-scheduled gossip]

Reliability

- Reliability is reasonable up to 10% failures, and then degrades gracefully
- Slightly worse than canonical gossip, due to update buffering at nodes

Latency

- Gossip scheduling delays delivery at some nodes
- But it has lower latency in most cases, and a lower median and average latency
- Gossip scheduling pushes some updates out quickly

Conclusion and Open Directions

- Canonical gossip is inefficient under multiple publishers sending out periodic broadcast streams
- Use gossip scheduling to efficiently piggyback different streams at nodes:
  - satisfies network packet size constraints
  - retains reliability (compared to canonical gossip)
  - improves latency
  - lowers energy consumption
- Open directions:
  - Dynamic version: adding/deleting streams, varying periods
  - Distributed scheduling

Distributed Protocols Research Group: http://dprg.cs.uiuc.edu