TRANSCRIPT
Basem Shihada, Computer Science & Electrical Engineering
CEMSE, KAUST
University of Waterloo Seminar
December 8th, 2014
WQM: Practical, Adaptive, and Lightweight Wireless Queue
Management System
How Bad are the Delays?
Pan, Rong, et al., "PIE: A Lightweight Control Scheme to Address the Bufferbloat Problem," in Proceedings of the 2013 IEEE Conference on High Performance Switching and Routing, July 2013.
How Bad are the Delays (Bufferbloat)?
Measurements in a wired access network with system backup to a remote server. Consistent delays of over 1 sec.
Where are the Bloated Buffers?
• Buffers exist at multiple layers in the stack
– Application layer buffers
– TCP socket buffers
– Txqueue buffers
– Device driver ring buffers
– Hardware buffers
K. Jamshaid, B. Shihada, A. Showail, and P. Levis, "Deflating Link Buffers in Wireless Mesh Networks," Elsevier Journal of Ad-Hoc Networks, Vol. 16, pp. 266-280, 2014.
Large FTP transfer
Txqueue buffers are Bloated
1000 packets (packet → 1500 B)
Problem Statement
low utilization, low delays vs. high throughput, high delays
Determine buffer size to balance the throughput-delay tradeoff
Buffer Sizing Rule of Thumb
Router needs a buffer size of B = RTT × C, where
– RTT is the two-way propagation delay
– C is the bottleneck link capacity
[Figure: sender → router → receiver over a bottleneck link of capacity C and round-trip time RTT]
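As a sanity check on the rule of thumb, a quick back-of-the-envelope computation (the RTT and capacity values here are illustrative, not from the talk):

```python
# Rule-of-thumb buffer size: B = RTT x C
rtt_s = 0.1             # two-way propagation delay: 100 ms (assumed)
capacity_bps = 100e6    # bottleneck link capacity: 100 Mb/s (assumed)

buffer_bits = rtt_s * capacity_bps          # 10,000,000 bits
buffer_packets = buffer_bits / (1500 * 8)   # in full-size 1500 B packets
print(round(buffer_packets))                # 833
```

Even a modest 100 ms / 100 Mb/s path already calls for roughly 833 full-size packets of buffering under this rule.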
Rule of Thumb Exceptions in Wireless Networks
• Wireless link: abstraction for shared spectrum
• Variable Frame Aggregation
• Variable Packet Inter-Service Rate
• Adaptive link rates
Challenges in Wireless Networks
• Wireless link: abstraction for shared spectrum
– Bottleneck spread over multiple nodes
[Figure: wireless mesh with a gateway to the Internet]
Challenges in Wireless Networks
• Wireless link: abstraction for shared spectrum
– Bottleneck spread over multiple nodes
• Variable Frame Aggregation
– Impact of large aggregates with multiple sub-frames
[Figure: maximum A-MPDU aggregate size — N/A in legacy 802.11, up to 64 KB in 802.11n, up to 1 MB in 802.11ac]
Challenges in Wireless Networks
• Wireless link: abstraction for shared spectrum
– Bottleneck spread over multiple nodes
• Variable Frame Aggregation
– Impact of large aggregates with multiple sub-frames
• Variable Packet Inter-Service Rate
– Random MAC scheduling
– Sporadic noise and interference
• Adaptive link rates
– With the default Linux buffer size, the time to empty a full buffer:
600 Mb/s vs. 6.5 Mb/s: drain times 2 orders of magnitude apart
A. Showail, K. Jamshaid, B. Shihada, "Buffer Sizing in Wireless Networks: Challenges and Opportunities," IEEE Communications Magazine, Accepted, 2014.
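The two-orders-of-magnitude gap can be computed directly from the default 1000-packet txqueue of 1500 B packets:

```python
# Time to empty the default Linux txqueue (1000 packets x 1500 B = 12 Mb)
# at 802.11n's highest and lowest link rates.
buffer_bits = 1000 * 1500 * 8

for rate_bps in (600e6, 6.5e6):
    drain_s = buffer_bits / rate_bps
    print(f"{rate_bps / 1e6:g} Mb/s -> {drain_s * 1e3:.0f} ms")
# 600 Mb/s drains in 20 ms; 6.5 Mb/s takes ~1846 ms
```

A buffer sized for the fastest rate thus holds nearly two seconds of queuing delay once the link rate adapts downward.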
• Wireless link: abstraction for shared spectrum
• Variable Frame Aggregation
• Variable Packet Inter-Service Rate
• Adaptive link rates
Severe performance degradation in throughput, delay, and packet dropping
What about Wireless Multi-Hop?
Solution Framework
B. Shihada and K. Jamshaid, "Buffer Sizing for Multi-hop Wireless Networks," U.S. Patent No. 8,638,686, 2014.
Collision Domains
[Figure: chain of nodes 6-5-4-3-2-1-0 connected by links l6, l5, l4, l3, l2, l1 toward node 0]
2-hop interference model: approximates RTS/CTS use in 802.11
Set of interfering links that contend for channel access
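The collision-domain idea can be sketched for the chain above; under the 2-hop interference model, each link contends with every link within two hops:

```python
# Collision domains on a 6-link chain (links l1..l6), assuming the
# 2-hop interference model: link i contends with links l(i-2)..l(i+2).
links = range(1, 7)
domains = {i: [j for j in links if abs(i - j) <= 2] for i in links}

# The bottleneck collision domain is the largest such set; it limits
# the end-to-end rate of a flow traversing the chain.
bottleneck = max(domains.values(), key=len)
print(bottleneck)   # [1, 2, 3, 4, 5] (l3's domain; l4's is equally large)
```

This is only an illustrative sketch of the model, but it shows why the bottleneck sits mid-chain: interior links contend with the most neighbors.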
Bottleneck Collision Domain
[Figure: the same chain topology with the bottleneck collision domain highlighted]
Set of links that contend with the maximum number of links
– Limits the end-to-end rate of a flow
Neighborhood Buffer
Instead of having big local buffers at each node, consider the combined effect of interfering nodes when sizing the buffer:
[Figure: the same chain; nodes 0 through 5 form the bottleneck collision domain]
The neighborhood buffer size is the sum of the buffers of the nodes in the bottleneck collision domain (0 through 5). Note: node 6 does not interfere.
DNB: Distributed Neighborhood Buffer
1) Determine the bottleneck buffer B = R × RTT
2) Assign per-node buffers b_i such that B = Σ_{i ∈ bottleneck} b_i
Assigning Per-node Buffer
• Drops close to source are preferable
• Introduces a generic cost function
– cost of drop increases with hop count

min Σ_{i=1}^{M} (drop probability_i × cost function_i)
subject to Σ_{i=1}^{M} b_i = B
and b_i ≥ 0, ∀ i ∈ M

where M is the number of nodes in the bottleneck collision domain
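A toy illustration of the allocation's shape (the cost-proportional weights are an assumption for illustration, not the paper's optimizer): since the cost of a drop grows with hop count, near-source nodes get small buffers and absorb most of the drops, while far nodes get larger buffers.

```python
# Toy split of the neighborhood buffer B across the M nodes of the
# bottleneck collision domain. Weights proportional to drop cost are
# an illustrative assumption, not the paper's solution method.
B = 50                      # neighborhood buffer, in packets
costs = [1, 2, 3, 4, 5, 6]  # cost_i grows with hop count from the source

total = sum(costs)
b = [round(B * c / total) for c in costs]
print(b)   # per-node buffers b_i, small near the source, large downstream
```

Any feasible assignment must satisfy the constraint Σ b_i = B; the cost function only shapes where within the domain the buffering (and hence the dropping) happens.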
Solution Framework
A. Showail, K. Jamshaid, B. Shihada, "An Empirical Evaluation of Bufferbloat in IEEE 802.11n Wireless Networks," IEEE Wireless Communications and Networking Conference (WCNC), pp. 3088-3093, 2014.
Wireless Queue Management (WQM)
[Figure: WQM inputs — frame aggregation, link rate, channel utilization]
Adaptively set buffer size based on network measurements:
– force max/min limits on queue size
– trade off queuing delay vs. queue size
– account for channel busy time
A. Showail, K. Jamshaid, and B. Shihada, "WQM: An Aggregation-aware Queue Management Scheme for IEEE 802.11n based Networks," in Proc. ACM Sigcomm Capacity Sharing Workshop (CSWS), pp. 15-20, 2014.
WQM Operations
[Figure: buffer of size B drained at link rate R, with packet length L and aggregation level N]
1. Initial phase: B_initial = R × ARTT
2. Adjustment phase: T_drain = (B × L) / (R × F(N)), keeping B_max > B > B_min
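A sketch of the adjustment loop (the delay target, the shrink/grow policy, and the 1/N channel-share stand-in for F(N) are all assumptions for illustration, not WQM's actual constants):

```python
# Simplified WQM-style adjustment loop (illustrative constants).
B_MIN, B_MAX = 2, 1000    # buffer bounds, in packets (assumed)
TARGET_MS = 10            # assumed queuing-delay target

def drain_time_ms(B, L_bits, R_bps, share):
    """T_drain = B*L / (R * F(N)); here F(N) is a crude channel share."""
    return 1e3 * B * L_bits / (R_bps * share)

def adjust(B, L_bits, R_bps, share):
    if drain_time_ms(B, L_bits, R_bps, share) > TARGET_MS:
        B //= 2           # buffer drains too slowly: shrink
    else:
        B += 1            # headroom available: grow slowly
    return max(B_MIN, min(B_MAX, B))

# A 1000-packet buffer at 6.5 Mb/s shared by 3 nodes drains far too
# slowly, so the buffer is cut in half.
print(adjust(1000, 1500 * 8, 6.5e6, 1/3))   # 500
```

The key point the sketch captures is that the buffer is re-sized from a measured drain time, so it tracks both link-rate adaptation and channel sharing rather than staying at a fixed default.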
Testbed Topology
Node setup: 10 distributed Shuttle nodes at Building 1, Level 4.
Software setup: Customized Linux kernel for statistics collection.
Network traffic setup: Large file transfers.
DNB with Single Flow
[Plot: delay for 2-hop, 3-hop, and 4-hop topologies, normalized to results with the proposed buffer sizes]
[Plot: goodput for 2-hop, 3-hop, and 4-hop topologies, normalized to results with the default buffer sizes]
Two orders of magnitude improvement in delay while achieving 90% goodput
DNB with Multiple Flows
Scheme                   Avg. goodput   Mean RTT
Default buffer size      786            1653
Proposed buffer sizing   712            91

Intersecting 3-hop & 4-hop flows in our 10-node testbed
Average RTT is reduced by a factor of 20 at the cost of a 9% drop in goodput
WQM Single Flow Multi-Hop Latency
[Plot: average latency for 1, 2, and 3 hops; avg. values shown: 49.47 ms, 90.43 ms, 224.4 ms]
WQM Single Flow Multi-Hop Goodput
WQM Multi Flow Single-Hop Latency
[Plot: latency for 1, 3, and 5 flows]
WQM reduces RTT by 5x compared to default buffers and by 2x compared to CoDel
WQM Multi Flow Single-Hop Goodput
JFI for the default buffer size is 0.77, compared to 0.99 for both WQM and CoDel
WQM Multi Flow over a Single-Hop
WQM prevents flows from quickly filling up the buffers and starving others
Num. of   Default buffer size           WQM
flows     Throughput (Mb/s)  RTT (ms)   Throughput (Mb/s)  RTT (ms)
1         155.7              61.51      134.35             13.1
2         78.43              65.8       69.21              13.47
3         51.96              420.66     45.77              14.19
4         39.22              213.19     33.96              14.91
5         31.38              937.56     27.41              14.93
Num. of   Jain's fairness index (JFI)
flows     Default buffer size   WQM
2         0.99                  0.99
3         0.7                   0.99
4         0.89                  0.99
5         0.69                  0.99
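Jain's fairness index used in these tables can be reproduced from per-flow throughputs; a minimal helper (the example rates are made up, not testbed data):

```python
# Jain's fairness index: JFI = (sum x_i)^2 / (n * sum x_i^2).
# 1.0 means perfectly equal shares; 1/n means one flow takes everything.
def jfi(rates):
    return sum(rates) ** 2 / (len(rates) * sum(r * r for r in rates))

print(round(jfi([10, 10, 10]), 2))   # 1.0: equal shares
print(round(jfi([30, 1, 1]), 2))     # 0.38: one flow starves the others
```

An index near 0.7, as seen with default buffers, already indicates one flow monopolizing the shared buffer.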
Latency improvement of >5x with around a 10% throughput drop
WQM Multi Flow Multi Hop Results
[Figure: 3-hop chain topology — Source, 1st Hop, 2nd Hop, 3rd Hop — carrying Flow #1, Flow #2, and Flow #3]
WQM Multi Flow over Multi-hops
Num. of   Default buffer size           WQM
flows     Throughput (Mb/s)  RTT (ms)   Throughput (Mb/s)  RTT (ms)
1         68.32              169.44     61.08              33.76
2         34.52              165.3      32.1               35.32
3         22.89              177.09     20.4               38.83
4         17.06              186.29     15.94              38.55
5         13.76              193.47     12.54              38.2
WQM reduces RTT by 5x at the cost of a 10% drop in throughput in the worst case
WQM Testbed Results
WQM adaptively sets queue size in response to changing network conditions
Default Scheme vs. Proposed Scheme
Network traffic setup: 1 file transfer in the background + real-time communication
Latency: 12x improvement; average/expectation: 10x improvement
Jitter: 17x improvement; average/expectation: 10x improvement
Data Traffic
Audio demo
• Network traffic setup: 1 file transfer in the background + 1 real-time audio stream
[Audio samples: wireless with default buffers · wireless with proposed buffers · wired]
Video demo
• Network traffic setup: 1 file transfer in the background + real-time video streaming
WQM for KAUST New Energy Oases (NEO)
• WQM has been applied in practical scenarios in collaboration with KAUST Economic Development (Innovation Cluster).
• A testbed consisting of ten nodes was configured and integrated with the solar panels in order to replace the 3G modems provided by Mobily.
• Our solution forwards all packets arriving on the wired interface to the wireless interface within the hop itself, without reconfiguring the solar panels themselves.
• We configured the next hop based on a predefined routing table in a multi-hop fashion up to the network gateway using our WQM technology.
Fairness in Wireless Multi-Hop
• The objective is to "fairly" allocate channel resources among WMN nodes [1,2,3].
• Proposed a distributed MAC-layer protocol, called T-MAC, which extends Lamport's mutual exclusion algorithm to frame scheduling in WMNs.
• Using analytical modeling of TCP streams, we derive a closed-form solution for throughput [2].
• T-MAC is implemented in ns-3. Our results achieve fairness while maintaining high network utilization [3].

[1] F. Nawab, K. Jamshaid, B. Shihada, and P-H. Ho, "TMAC: Timestamp-ordered MAC for CSMA/CA Wireless Mesh Networks," in Proc. IEEE ICCCN, 2011.
[2] F. Nawab, K. Jamshaid, B. Shihada, and P-H. Ho, "MAC-Layer Protocol for TCP Fairness in Wireless Mesh Networks," in Proc. IEEE ICCC, 2012.
[3] F. Nawab, K. Jamshaid, B. Shihada, and P-H. Ho, "Fair Packet Scheduling in Wireless Mesh Networks," Elsevier Journal of Ad-Hoc Networks, Vol. 13, Part B, pp. 414-427, 2014.
Energy in Multi-Hop Networks
• The objective is to minimize the energy consumption at the energy-critical nodes and the overall network transmission delay [1,3].
• The transmission rates of energy-critical nodes are adjusted according to their local packet queue sizes.
• We proved that there exists a threshold-type control which is optimal [1].
• We implemented a decentralized algorithm to control the packet scheduling of these energy-critical nodes [2,4].

[1] L. Xia and B. Shihada, "Decentralized Transmission Scheduling in Energy-Critical Multi-Hop Wireless Networks," in Proc. American Control Conference, pp. 113-118, 2013.
[2] L. Xia and B. Shihada, "Max-Min Optimality of Service Rate Control in Closed Queueing Networks," IEEE Transactions on Automatic Control, Vol. 58, No. 4, pp. 1051-1056, 2013.
[3] L. Xia and B. Shihada, "Power and Delay Optimization for Multi-Hop Wireless Networks," International Journal of Control, Vol. 87, No. 6, pp. 1252-1265, 2014.
[4] L. Xia and B. Shihada, "A Jackson Network Model and Threshold Policy for Joint Optimization of Energy and Delay in Multi-Hop Wireless Networks," European Journal of Operational Research, Accepted, 2014.
KAUST NetLab Members
Collaborators
Prof. Kang Shin, Prof. Pin-Han Ho, Prof. Philip Levis, Prof. Radu Stoleru
Concluding Remarks
• Challenge: Choosing the optimal queue size in wireless networks
• Proposed Solutions:
– DNB: sizing bottleneck buffers and distributing them among nodes
– WQM: chooses the queue size based on network load and channel condition
• Performance: Improvements in latency by at least 5x over default Linux buffers
• Feature: Improvements in network fairness by limiting the ability of a single flow to saturate the buffers
Questions/Comments/Feedback