TRANSCRIPT
Techniques for Building Long-Lived Wireless Sensor Networks
Jeremy Elson and Deborah Estrin
UCLA Computer Science Department and USC/Information Sciences Institute
Collaborative work with R. Govindan, J. Heidemann, and SCADDS of other grad students
What might make systems long-lived?
- Consider energy the scarce system resource
  - Minimize communication (especially over long distances)
  - Computation costs much less, so: in-network processing (aggregation, summarization; see the sketch after this list); adaptivity at fine and coarse granularity
- Maximize the lifetime of the system, not of individual nodes
  - Exploit redundancy; design for low duty-cycle operation
- Exploit non-uniformities when you have them
  - Tiered architecture
- New metrics
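A minimal sketch of the in-network processing idea above: instead of forwarding every raw reading toward the sink, a relay summarizes the readings it hears before transmitting, trading cheap local computation for fewer transmitted bits. The SensorReading and summarize names are illustrative, not from the SCADDS codebase.

```python
# Sketch: a relay summarizes readings before forwarding them toward the sink.
from dataclasses import dataclass

@dataclass
class SensorReading:
    node_id: int
    value: float   # e.g., temperature in degrees C

def summarize(readings):
    """Aggregate many readings into one small summary message."""
    values = [r.value for r in readings]
    return {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
    }

# A relay that heard 50 readings forwards one summary instead of 50 packets.
readings = [SensorReading(i, 20.0 + 0.1 * i) for i in range(50)]
print(summarize(readings))
```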
What might make systems long-lived?
- Robustness to dynamic conditions:
  - Make the system self-configuring and self-reconfiguring
  - Avoid manual configuration
  - Empirical adaptation (measure and act)
- Localized algorithms prevent single points of failure and help to isolate the scope of faults
  - Also crucial for scaling purposes!
The Rest of the Talk
Some of our initial building blocks for creating long-lived systems:
- Directed diffusion: a new data dissemination paradigm
- Adaptive fidelity
- Use of small, randomized identifiers
- Tiered architecture
- Time synchronization
Directed Diffusion: A Paradigm for Data Dissemination
Key features:
- Name data, not nodes
- Interactions are localized
- Data can be aggregated or processed within the network
- The network empirically adapts to the best distribution path, the correct duty cycle, etc. (see the sketch below)
[Figure: diffusion phases: 1. low data rate, 2. reinforcement, 3. high data rate.]
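A highly simplified sketch of this pattern, assuming a fixed topology with known neighbor lists (the names NEIGHBORS, flood_interest, and reinforce are illustrative, not from the actual diffusion implementation): the sink floods a named interest, each node records a gradient toward the neighbor it heard the interest from, exploratory data flows back along gradients at a low rate, and the sink reinforces the delivering path for subsequent high-rate data.

```python
# Sketch of the directed diffusion pattern on a small static graph.
# Topology, names, and the single-gradient simplification are illustrative.
from collections import defaultdict

NEIGHBORS = {
    "sink": ["a", "b"],
    "a": ["sink", "c"],
    "b": ["sink", "c"],
    "c": ["a", "b", "source"],
    "source": ["c"],
}

gradients = defaultdict(dict)   # node -> {interest name: upstream neighbor}

def flood_interest(interest, node, upstream=None, seen=None):
    """Phase 1 (low data rate): the sink floods a named interest; each node
    remembers which neighbor it heard the interest from (a gradient)."""
    seen = set() if seen is None else seen
    if node in seen:
        return
    seen.add(node)
    if upstream is not None:
        gradients[node][interest] = upstream
    for nbr in NEIGHBORS[node]:
        flood_interest(interest, nbr, upstream=node, seen=seen)

def exploratory_send(interest, node):
    """Exploratory data follows gradients hop by hop back to the sink."""
    path = [node]
    while node != "sink":
        node = gradients[node][interest]
        path.append(node)
    return path

def reinforce(path):
    """Phase 2 (reinforcement): the sink reinforces the delivering path so
    phase 3 (high data rate) flows only along it."""
    return list(reversed(path))

flood_interest("temperature > 30C", "sink")
path = exploratory_send("temperature > 30C", "source")
print("exploratory path to sink:", path)
print("reinforced high-rate path:", reinforce(path))
```

In the full diffusion scheme (not shown here), interests and gradients carry data rates and expire, and reinforcement adjusts those rates; this sketch keeps only the path-selection skeleton.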
Diffusion: Key Results
- Directed diffusion can provide significantly longer network lifetimes than existing schemes
- Keys to achieving this:
  - In-network aggregation
  - Empirical adaptation to the path
  - Localized algorithms and adaptive fidelity
- There exist simple, localized algorithms that can adapt their duty cycle... and they can increase overall network lifetime
[Figure: average dissipated energy (Joules/node/received event) vs. network size (nodes), comparing flooding, omniscient multicast, diffusion with suppression, and diffusion without suppression.]
Adaptivity I: Robustness in Data Diffusion
A primary goal of data diffusion is robustness through empirical adaptation: measuring and reacting to the environment.
[Figure: mean latency for data diffusion with no failures, 10% node failure, and 20% node failure.]
Because of this adaptation, mean latency (shown in the figure) for data diffusion degrades only mildly even with 10-20% node failure.
Adaptivity II: Adaptive Fidelity
- Extend system lifetime while maintaining accuracy
- Approach: estimate the node density needed for the desired quality; automatically adapt to variations in current density due to uneven deployment or node failure
- Assumes a dense initial deployment or additional node deployment
Adaptive Fidelity Status
Applications:
- Maintain consistent latency or bandwidth in multihop communication
- Maintain consistent sensor vigilance
Status:
- Probabilistic neighborhood estimation for ad hoc routing (sketched below): 30-55% longer lifetime with 2-6 sec higher initial delay
- Currently underway: location-aware neighborhood estimation
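A minimal sketch of the duty-cycle adaptation idea behind adaptive fidelity, under assumed mechanics (estimate_neighbors, sleep_probability, and the target value are illustrative; the actual probabilistic neighborhood estimation above is more involved): each node estimates how many live neighbors it has from the beacons it overhears, then sleeps with whatever probability keeps the expected number of awake neighbors near a target.

```python
import random

TARGET_AWAKE = 4   # assumed number of awake neighbors needed for the desired fidelity

def estimate_neighbors(beacons_heard, window_s, beacon_period_s):
    """Estimate live neighbor count from beacons overheard in a time window."""
    expected_beacons_per_neighbor = window_s / beacon_period_s
    return beacons_heard / expected_beacons_per_neighbor

def sleep_probability(estimated_neighbors, target_awake=TARGET_AWAKE):
    """Sleep more aggressively where deployment is dense, never where it is sparse."""
    if estimated_neighbors <= target_awake:
        return 0.0                                   # too few neighbors: stay awake
    return 1.0 - target_awake / estimated_neighbors  # keep ~target_awake awake on average

def next_epoch_state(estimated_neighbors):
    """Decide whether this node sleeps or stays awake for the next epoch."""
    return "sleep" if random.random() < sleep_probability(estimated_neighbors) else "awake"

# Example: 80 beacons heard in 60 s with a 6 s beacon period -> ~8 live neighbors.
n = estimate_neighbors(beacons_heard=80, window_s=60, beacon_period_s=6)
print(round(n, 1), sleep_probability(n), next_epoch_state(n))
```

Because the estimate is recomputed every epoch, the duty cycle tracks uneven deployment and node failure automatically, which is the "measure and act" adaptation the slides emphasize.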
Small, Random Identifiers
- Sensor nets have many uses for unique identifiers (packet fragmentation, reinforcement, compression codebooks, ...)
- It's critical to maximize the usefulness of every bit transmitted; each bit reduces net lifetime (Pottie)
- Low data rates + high dynamics = no space to amortize large (guaranteed-unique) IDs or a claim/collide protocol
- So: use small, random, ephemeral transaction IDs?
  - Locality is key: random IDs can be much smaller than guaranteed-unique IDs if the total net size is large and transaction density is small
- ID collisions lead to occasional losses; persistent losses are avoided because the identifiers are constantly changing
  - The marginal cost of occasional losses is small compared to losses from dynamics, wireless conditions, collisions, ...
- AFF allows us to optimize the number of bits used for identifiers (see the sketch below):
  - Fewer bits = fewer wasted bits per data bit, but a high collision rate; vs.
  - More bits = less waste due to ID collisions, but many bits wasted on headers
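A back-of-the-envelope sketch of that trade-off, under assumptions not in the talk (each packet carries one b-bit random transaction ID, a packet whose ID collides with any other concurrent transaction is lost, and losses from other causes are ignored): the expected waste per packet is the b header bits plus the data bits expected to be lost to collisions, and we pick the b that minimizes it.

```python
# Sketch: choose the random-ID length b that minimizes expected wasted bits per packet.
# Assumptions are illustrative, not from the talk.

def collision_probability(id_bits, n_concurrent):
    """Probability that this transaction's random ID matches at least one other."""
    return 1.0 - (1.0 - 2.0 ** -id_bits) ** (n_concurrent - 1)

def expected_waste_bits(id_bits, data_bits, n_concurrent):
    """Header bits spent on the ID plus data bits expected to be lost to collisions."""
    return id_bits + collision_probability(id_bits, n_concurrent) * data_bits

def best_id_length(data_bits, n_concurrent, max_bits=32):
    return min(range(1, max_bits + 1),
               key=lambda b: expected_waste_bits(b, data_bits, n_concurrent))

# Example: 16-bit payloads and ~8 locally concurrent transactions (hypothetical values).
b = best_id_length(data_bits=16, n_concurrent=8)
print(b, round(expected_waste_bits(b, 16, 8), 2))
```

Plugging in different concurrency levels or payload sizes shifts the optimum, which is exactly the fewer-bits-vs-more-bits tension listed above.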
Address-Free Fragmentation
[Figure: data size = 16 bits.]
Exploit Non-Uniformities I: Tiered Architecture
- Consider a memory hierarchy: registers, cache, main memory, swap space on disk
  - Due to locality, it provides the illusion of a flat memory with the speed of registers but the size and price of disk space
- Similar goal in sensor nets: we want a spectrum of hardware within a network, with the illusion of the CPU/memory, range, and scaling properties of large nodes but the price, numbers, power consumption, and proximity to physical phenomena of the smallest
Exploit Non-Uniformities I: Tiered Architecture
- We are implementing a sensor net hierarchy: PC-104s, tags, motes, ephemeral one-shot sensors
- Save energy by:
  - Running the lower-power and more numerous nodes at higher duty cycles than the larger ones
  - Having low-power "pre-processors" activate higher-power nodes or components (Sensoria approach; see the sketch below)
- Components within a node can be tiered too
  - Our "tags" are a stack of loosely coupled boards
  - Interrupts activate high-energy assets only on demand
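A minimal sketch of the pre-processor idea, with entirely hypothetical names and thresholds (cheap_sensor_level, HighPowerNode, THRESHOLD are illustrative, not from the Sensoria or SCADDS hardware): a low-power tier samples a cheap detector at a high duty cycle and only wakes the expensive asset when something interesting happens.

```python
import random
import time

THRESHOLD = 0.8   # hypothetical trigger level for the cheap detector

def cheap_sensor_level():
    """Stand-in for a low-power detector (e.g., an acoustic energy level)."""
    return random.random()

class HighPowerNode:
    """Stand-in for an expensive asset (camera, DSP, long-range radio)."""
    def __init__(self):
        self.awake = False

    def wake(self):
        self.awake = True
        print("high-power node: waking, running full sensing/processing")

    def sleep(self):
        self.awake = False

def preprocessor_loop(big_node, samples=20, sample_period_s=0.05):
    """Low-power tier: sample cheaply and often; wake the big node only on demand."""
    for _ in range(samples):
        event = cheap_sensor_level() > THRESHOLD
        if event and not big_node.awake:
            big_node.wake()
        elif not event and big_node.awake:
            big_node.sleep()   # the expensive asset returns to sleep once the event passes
        time.sleep(sample_period_s)

preprocessor_loop(HighPowerNode())
```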
Exploit Non-Uniformities II: Time Synchronization
- Time sync is critical at many layers; some affect energy use and system lifetime:
  - TDMA guard bands
  - Data aggregation and caching
  - Localization
- But time sync needs are non-uniform:
  - Precision
  - Lifetime
  - Scope and availability
  - Cost and form factor
- No single method is optimal on all axes
Exploit Non-Uniformities II: Time Synchronization
- Use multiple modes:
  - "Post-facto" synchronization pulse (sketched below)
  - NTP
  - GPS, WWVB
  - Relative time "chaining"
- Combinations can (?) be necessary and sufficient to minimize resource waste
  - Don't spend energy to get better sync than the app needs
- Work in progress…
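A minimal sketch of the post-facto synchronization idea, with illustrative structures (Node, local_clock, and the pulse handling are stand-ins): nodes timestamp events with their free-running, unsynchronized local clocks; when a sync pulse is later broadcast, each node expresses its event time relative to the pulse's arrival, making timestamps from different nodes comparable after the fact, without spending energy on continuous synchronization.

```python
# Sketch: post-facto synchronization (propagation delay of the pulse is ignored).

class Node:
    def __init__(self, name, clock_offset_s):
        self.name = name
        self.clock_offset_s = clock_offset_s   # unknown to the node itself
        self.event_local = None
        self.pulse_local = None

    def local_clock(self, true_time_s):
        """The node only ever sees its own (offset) clock."""
        return true_time_s + self.clock_offset_s

    def observe_event(self, true_time_s):
        self.event_local = self.local_clock(true_time_s)

    def hear_sync_pulse(self, true_time_s):
        self.pulse_local = self.local_clock(true_time_s)

    def event_relative_to_pulse(self):
        """Timestamp normalized to the pulse; comparable across nodes."""
        return self.event_local - self.pulse_local

# Two nodes with different (unknown) clock offsets observe the same event.
a, b = Node("a", clock_offset_s=12.7), Node("b", clock_offset_s=-3.1)
for n in (a, b):
    n.observe_event(true_time_s=100.0)     # event happens at true time 100 s
    n.hear_sync_pulse(true_time_s=105.0)   # pulse broadcast 5 s later, heard by both

# Both nodes report the event as -5.0 s relative to the pulse, despite their offsets.
print(a.event_relative_to_pulse(), b.event_relative_to_pulse())
```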
Conclusions
- Many promising building blocks exist, but "long-lived" often means highly vertically integrated and application-specific
  - Traditional layering is often not possible
- The challenge is creating reusable components common across systems
- Create general-purpose tools for building networks, not general-purpose networks