CROSS-LAYER DESIGN OF NETWORKING PROTOCOLS IN WIRELESS LOCAL AREA NETWORKS AND MOBILE AD HOC NETWORKS
By
HONGQIANG ZHAI
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
2006
Copyright 2006
by
Hongqiang Zhai
Dedicated to my beloved parents and brothers.
ACKNOWLEDGMENTS
First and foremost, I would like to express my sincere gratitude to my advisor, Professor Yuguang Fang, for his invaluable advice, encouragement and motivation during the course of this work. This dissertation would not have been possible without his guidance and support. I also thank him for his philosophical advice on both my academic and nonacademic life, which made me more mature, scholastically and personally.
I thank Professors Shigang Chen, Jose Fortes, Pramod Khargonekar and Sartaj Sahni for serving on my supervisory committee and for their valuable suggestions and constructive criticism. Thanks also go to Prof. John Shea, Prof. Tan Wong and Prof. Dapeng Wu for their many constructive suggestions and advice.
Many thanks are due to my colleagues Dr. Xiang Chen and Jianfeng Wang for their collaboration. I also thank Dr. Younggoo Kwon, Dr. Wenjing Lou, Dr. Wenchao Ma, Dr. Wei Liu, Dr. Byung-Seo Kim, Dr. Xuejun Tian, Dr. Sungwon Kim, Dr. Jae Sung Lim, Yu Zheng, Yanchao Zhang, Shushan Wen, Xiaoxia Huang, Yun Zhou, Jing Zhao, Chi Zhang, Frank Goergen, Pan Li, Feng Chen, Shan Zhang, Rongsheng Huang and many others at the University of Florida for the years of friendship and many helpful discussions.
Last but not least, I owe a special debt of gratitude to my parents and my brothers. Without their selfless love and support, I could never have achieved what I have.
TABLE OF CONTENTS
page
ACKNOWLEDGMENTS . . . . . . . . . . iv
LIST OF TABLES . . . . . . . . . . xii
LIST OF FIGURES . . . . . . . . . . xiii
ABSTRACT . . . . . . . . . . xviii
CHAPTER
1 INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
    1.1 Motivation . . . . . . . . . . 1
    1.2 Organization of the Dissertation . . . . . . . . . . 2
2 PERFORMANCE OF THE IEEE 802.11 DCF PROTOCOL IN WIRELESS LANS . . . . . . . . . . 8
    2.1 Introduction . . . . . . . . . . 8
    2.2 Preliminaries . . . . . . . . . . 10
        2.2.1 Distributed Coordination Function (DCF) . . . . . . . . . . 10
        2.2.2 System Modeling . . . . . . . . . . 11
    2.3 The Probability Distribution of the MAC Layer Service Time . . . . . . . . . . 12
        2.3.1 MAC Layer Service Time . . . . . . . . . . 12
        2.3.2 Probability Generating Functions (PGF) of MAC Layer Service Time . . . . . . . . . . 13
        2.3.3 The Processes of Collision and Successful Transmission . . . . . . . . . . 15
        2.3.4 Decrement Process of Backoff Timer . . . . . . . . . . 16
        2.3.5 Markov Chain Model for the Exponential Backoff Procedure . . . . . . . . . . 17
        2.3.6 Generalized State Transition Diagram . . . . . . . . . . 18
        2.3.7 Probability Distribution Modeling . . . . . . . . . . 20
        2.3.8 Derivation of Transmission Probability . . . . . . . . . . 23
    2.4 Queueing Modeling and Analysis . . . . . . . . . . 25
        2.4.1 Problem Formulation . . . . . . . . . . 25
        2.4.2 The Steady-State Probability of the M/G/1/K Queue . . . . . . . . . . 26
        2.4.3 Conditional Collision Probability pc and Distribution of MAC Layer Service Time . . . . . . . . . . 27
        2.4.4 Performance Metrics of the Queueing System . . . . . . . . . . 27
        2.4.5 Throughput . . . . . . . . . . 27
        2.4.6 Numerical Results . . . . . . . . . . 28
    2.5 Performance Evaluation . . . . . . . . . . 29
        2.5.1 Simulation Environments . . . . . . . . . . 29
        2.5.2 Probability Distribution of MAC Layer Service Time . . . . . . . . . . 30
        2.5.3 Comparison of M/G/1/K and M/M/1/K Approximations with Simulation Results . . . . . . . . . . 30
    2.6 Conclusions . . . . . . . . . . 33
3 HOW WELL CAN THE IEEE 802.11 DCF PROTOCOL SUPPORT QOS IN WIRELESS LANS . . . . . . . . . . 35
    3.1 Introduction . . . . . . . . . . 35
    3.2 Preliminaries . . . . . . . . . . 37
        3.2.1 Operations of the IEEE 802.11 . . . . . . . . . . 37
        3.2.2 Related Work . . . . . . . . . . 38
    3.3 Analytical Study of the IEEE 802.11 . . . . . . . . . . 40
        3.3.1 Maximum Throughput and Available Bandwidth . . . . . . . . . . 40
        3.3.2 Delay and Delay Variation . . . . . . . . . . 47
        3.3.3 Packet Loss Rate . . . . . . . . . . 54
    3.4 Simulation Study of the IEEE 802.11 . . . . . . . . . . 56
        3.4.1 Simulation Configuration . . . . . . . . . . 56
        3.4.2 Simulation Results . . . . . . . . . . 58
    3.5 Discussions . . . . . . . . . . 60
        3.5.1 Impact of Fading Channel . . . . . . . . . . 60
        3.5.2 Impact of Prioritized MAC . . . . . . . . . . 61
    3.6 Conclusion . . . . . . . . . . 61
4 A CALL ADMISSION AND RATE CONTROL SCHEME FOR MULTIMEDIA SUPPORT OVER IEEE 802.11 WIRELESS LANS . . . . . . . . . . 62
    4.1 Introduction . . . . . . . . . . 62
    4.2 Background . . . . . . . . . . 65
        4.2.1 Operations of the IEEE 802.11 DCF Protocol . . . . . . . . . . 65
        4.2.2 QoS Requirements for Multimedia Services . . . . . . . . . . 66
    4.3 Channel Busyness Ratio . . . . . . . . . . 67
        4.3.1 Definition of Channel Busyness Ratio . . . . . . . . . . 67
        4.3.2 Channel Busyness Ratio: An Accurate Sign of the Network Utilization . . . . . . . . . . 68
        4.3.3 Measurement of Channel Busyness Ratio . . . . . . . . . . 71
    4.4 CARC: Call Admission and Rate Control . . . . . . . . . . 71
        4.4.1 Design Rationale . . . . . . . . . . 72
        4.4.2 Call Admission Control . . . . . . . . . . 74
        4.4.3 Rate Control . . . . . . . . . . 76
    4.5 Performance Evaluation of CARC . . . . . . . . . . 79
        4.5.1 Simulation Configuration . . . . . . . . . . 79
        4.5.2 Simulation Results . . . . . . . . . . 80
    4.6 Discussions . . . . . . . . . . 85
        4.6.1 Impact of Fading Channel . . . . . . . . . . 85
        4.6.2 Impact of Prioritized MAC . . . . . . . . . . 86
    4.7 Conclusion . . . . . . . . . . 87
5 DISTRIBUTED FAIR AND EFFICIENT RESOURCE ALLOCATION WITH QOS SUPPORT OVER IEEE 802.11 WLANS . . . . . . . . . . 88
    5.1 Introduction . . . . . . . . . . 88
    5.2 Design Rationale . . . . . . . . . . 92
        5.2.1 Efficiency and QoS . . . . . . . . . . 92
        5.2.2 Fairness . . . . . . . . . . 94
    5.3 Distributed Resource Allocation (DRA) . . . . . . . . . . 95
        5.3.1 Basic Framework . . . . . . . . . . 96
        5.3.2 Fairness Support . . . . . . . . . . 100
        5.3.3 QoS Support . . . . . . . . . . 100
        5.3.4 Multiple Channel Rates Support . . . . . . . . . . 102
    5.4 Convergence Analysis . . . . . . . . . . 102
        5.4.1 Convergence of Multiplicative-Increase Phase . . . . . . . . . . 102
        5.4.2 Convergence to Fairness Equilibrium . . . . . . . . . . 105
        5.4.3 Discussion . . . . . . . . . . 109
        5.4.4 Parameter Selection . . . . . . . . . . 110
    5.5 Performance Evaluation . . . . . . . . . . 110
        5.5.1 Simulation Setup . . . . . . . . . . 110
        5.5.2 Channel Busyness Ratio Threshold . . . . . . . . . . 111
        5.5.3 Fairness . . . . . . . . . . 112
        5.5.4 Efficiency, Delay and Collision . . . . . . . . . . 115
        5.5.5 Quality of Service . . . . . . . . . . 116
    5.6 Related Work and Discussions . . . . . . . . . . 119
    5.7 Conclusions . . . . . . . . . . 121
6 PHYSICAL CARRIER SENSING AND SPATIAL REUSE IN MULTIRATE AND MULTIHOP WIRELESS AD HOC NETWORKS . . . . . . . . . . 123
    6.1 Introduction . . . . . . . . . . 123
    6.2 Optimum Carrier Sensing Range . . . . . . . . . . 127
        6.2.1 Aggregate Throughput and SINR at the Worst Case . . . . . . . . . . 127
        6.2.2 Maximum Throughput and Optimum Carrier Sensing Range under Shannon Capacity . . . . . . . . . . 130
        6.2.3 Maximum Throughput and Optimum Carrier Sensing Range under the Discrete Channel Rates of the IEEE 802.11 . . . . . . . . . . 131
        6.2.4 Impact of Random Topology . . . . . . . . . . 133
        6.2.5 Tradeoff between the Exposed Terminal Problem and the Hidden Terminal Problem . . . . . . . . . . 134
        6.2.6 Carrier Sensing Range and Strategies for Bidirectional Handshakes . . . . . . . . . . 136
        6.2.7 Optimum Carrier Sensing Range . . . . . . . . . . 140
    6.3 Utilize Multirate Capability of 802.11 in Wireless Multihop Ad Hoc Networks . . . . . . . . . . 140
        6.3.1 How to Set the Carrier Sensing Threshold for Multirate 802.11 MAC Protocol . . . . . . . . . . 140
        6.3.2 How to Choose Next Hops, Channel Rates and Set the Carrier Sensing Threshold for Multihop Flows . . . . . . . . . . 142
    6.4 Simulation Studies . . . . . . . . . . 149
        6.4.1 NS2 Extensions and Simulation Setup . . . . . . . . . . 150
        6.4.2 Optimum Carrier Sensing Range . . . . . . . . . . 150
        6.4.3 Spatial Reuse and End-to-End Performance of Multihop Flows . . . . . . . . . . 153
    6.5 Conclusions . . . . . . . . . . 154
7 A DUAL-CHANNEL MAC PROTOCOL FOR MOBILE AD HOC NETWORKS . . . . . . . . . . 156
    7.1 Introduction . . . . . . . . . . 156
    7.2 Background . . . . . . . . . . 160
        7.2.1 Physical Model . . . . . . . . . . 160
        7.2.2 Transmission Range and Sensing/Interference Range . . . . . . . . . . 160
    7.3 Problems and the Desired Protocol Behavior . . . . . . . . . . 161
        7.3.1 Hidden and Exposed Terminal Problem . . . . . . . . . . 161
        7.3.2 Limitations of NAV Setup Procedure . . . . . . . . . . 162
        7.3.3 Receiver Blocking Problem . . . . . . . . . . 163
        7.3.4 Intra-Flow Contention . . . . . . . . . . 164
        7.3.5 Inter-Flow Contention . . . . . . . . . . 165
        7.3.6 The Desired Protocol Behavior . . . . . . . . . . 165
        7.3.7 Limitation of IEEE 802.11 MAC Using Single Channel . . . . . . . . . . 166
    7.4 DUCHA: A New Dual-Channel MAC Protocol . . . . . . . . . . 166
        7.4.1 Protocol Overview . . . . . . . . . . 166
        7.4.2 Basic Message Exchange . . . . . . . . . . 167
        7.4.3 Solutions to the Aforementioned Problems . . . . . . . . . . 169
        7.4.4 Remarks on the Proposed Protocol . . . . . . . . . . 171
    7.5 Performance Evaluation . . . . . . . . . . 172
        7.5.1 Simulation Environments . . . . . . . . . . 172
        7.5.2 Simple Scenarios . . . . . . . . . . 173
        7.5.3 Random Topology for One-Hop Flows . . . . . . . . . . 176
        7.5.4 Random Topology for Multihop Flows . . . . . . . . . . 178
    7.6 Conclusions . . . . . . . . . . 181
8 A SINGLE-CHANNEL SOLUTION TO HIDDEN/EXPOSED TERMINAL PROBLEMS IN WIRELESS AD HOC NETWORKS . . . . . . . . . . 183
    8.1 Introduction . . . . . . . . . . 183
    8.2 Various Ranges in Wireless Multihop Ad Hoc Networks . . . . . . . . . . 188
    8.3 Addressing the Hidden/Exposed Terminal Problems with Short Busy Advertisement Signal . . . . . . . . . . 189
        8.3.1 Basic Operations in the SBA Procedure . . . . . . . . . . 190
        8.3.2 Mitigating Exposed Terminal Problem by Adjusting Carrier Sensing Range . . . . . . . . . . 191
        8.3.3 Parameters in SBA Procedure . . . . . . . . . . 191
        8.3.4 Positions of IDFS Periods in the DATA Frame . . . . . . . . . . 193
        8.3.5 Busy Advertisement Signal . . . . . . . . . . 195
        8.3.6 Power Control for Short Busy Advertisement . . . . . . . . . . 195
        8.3.7 Start and Stop SBA Procedure . . . . . . . . . . 196
        8.3.8 Synchronization Issue . . . . . . . . . . 198
        8.3.9 Accumulative Acknowledgement . . . . . . . . . . 198
        8.3.10 CTS Dominance . . . . . . . . . . 199
        8.3.11 Compatibility with Legacy 802.11 MAC Scheme . . . . . . . . . . 199
    8.4 Maximize Spatial Reuse Ratio and Minimize Power Consumption by Power Control . . . . . . . . . . 199
        8.4.1 Power Control for Both DATA Frame and Busy Advertisement in SBA-MAC . . . . . . . . . . 200
        8.4.2 Power Control for the Approach Using a Large Carrier Sensing Range . . . . . . . . . . 202
    8.5 Performance Analysis . . . . . . . . . . 204
        8.5.1 Spatial Reuse Ratio . . . . . . . . . . 204
        8.5.2 Protocol Overhead . . . . . . . . . . 204
        8.5.3 Numerical Results . . . . . . . . . . 206
    8.6 Conclusions . . . . . . . . . . 208
9 A DISTRIBUTED PACKET CONCATENATION SCHEME FOR SENSOR AND AD HOC NETWORKS . . . . . . . . . . 211
    9.1 Introduction . . . . . . . . . . 211
    9.2 Operations of the IEEE 802.11 . . . . . . . . . . 213
    9.3 Adaptive Packet Concatenation (APC) Scheme and Performance Analysis . . . . . . . . . . 214
        9.3.1 Basic Scheme . . . . . . . . . . 214
        9.3.2 Performance Analysis of the Network Throughput in the Single Hop Case . . . . . . . . . . 217
        9.3.3 Performance Analysis of the Network Throughput in a Multihop Network . . . . . . . . . . 221
    9.4 Conclusion . . . . . . . . . . 225
10 IMPACT OF ROUTING METRICS ON PATH CAPACITY IN MULTIRATE AND MULTIHOP WIRELESS AD HOC NETWORKS . . . . . . . . . . 226
    10.1 Introduction . . . . . . . . . . 226
    10.2 Impact of Multirate Capability on Path Selection in Wireless Ad Hoc Networks . . . . . . . . . . 231
        10.2.1 Receiver Sensitivity and SNR for Multiple Rates . . . . . . . . . . 231
        10.2.2 Tradeoff between the Rate and the Transmission Distance . . . . . . . . . . 232
        10.2.3 Carrier Sensing Range, Interference and Spatial Reuse . . . . . . . . . . 232
        10.2.4 Effective Data Rate and Protocol Overhead . . . . . . . . . . 233
    10.3 Path Capacity in Wireless Ad Hoc Networks . . . . . . . . . . 234
        10.3.1 Link Conflict Graph . . . . . . . . . . 235
        10.3.2 Upper Bound of Path Capacity in Single Interference Model . . . . . . . . . . 237
        10.3.3 Exact Path Capacity in Single Interference Model . . . . . . . . . . 240
        10.3.4 Path Capacity in Multi-Interference Model with Variable Link Rate . . . . . . . . . . 242
        10.3.5 Extended to Multiple Paths between a Source and Its Destination or between Multiple Pairs of Source and Destination . . . . . . . . . . 243
        10.3.6 Consider the Packet Error Rate over Each Link in the Link Scheduling Algorithm . . . . . . . . . . 244
    10.4 Path Selection in Wireless Ad Hoc Networks . . . . . . . . . . 244
        10.4.1 Optimal Path Selection . . . . . . . . . . 245
        10.4.2 Using Routing Metrics in Path Selection . . . . . . . . . . 246
    10.5 Performance Evaluation . . . . . . . . . . 247
        10.5.1 Simulation Setup . . . . . . . . . . 247
        10.5.2 Compared with Optimal Routing . . . . . . . . . . 248
        10.5.3 Performance Evaluation of Six Routing Metrics in a Larger Topology . . . . . . . . . . 249
        10.5.4 Path Capacity of a Single-Rate Network . . . . . . . . . . 252
    10.6 Conclusions . . . . . . . . . . 253
11 DISTRIBUTED FLOW CONTROL AND MEDIUM ACCESS CONTROL IN MOBILE AD HOC NETWORKS . . . . . . . . . . 255
    11.1 Introduction . . . . . . . . . . 255
    11.2 Impact of MAC Layer Contentions on Traffic Flows . . . . . . . . . . 258
    11.3 OPET: Optimum Packet Scheduling for Each Traffic Flow . . . . . . . . . . 261
        11.3.1 Overview . . . . . . . . . . 261
        11.3.2 Rule 1: Assigning High Channel Access Priority to Receivers . . . . . . . . . . 261
        11.3.3 Rule 2: Backward-Pressure Scheduling . . . . . . . . . . 263
        11.3.4 Rule 3: Source Self-Constraint Scheme . . . . . . . . . . 268
        11.3.5 Rule 4: Round Robin Scheduling . . . . . . . . . . 270
    11.4 Performance Evaluation . . . . . . . . . . 271
        11.4.1 Simple Scenarios . . . . . . . . . . 272
        11.4.2 Random Topology . . . . . . . . . . 273
        11.4.3 Random Topology with Mobility . . . . . . . . . . 276
        11.4.4 Simulation Results for TCP Traffic . . . . . . . . . . 277
        11.4.5 Notes on the Relative Benefits of the Four Techniques . . . . . . . . . . 279
    11.5 Related Works and Discussion . . . . . . . . . . 280
    11.6 Conclusions . . . . . . . . . . 282
12 WCCP: IMPROVING TRANSPORT LAYER PERFORMANCE IN MULTIHOP AD HOC NETWORKS BY EXPLOITING MAC LAYER INFORMATION . . . . . . . . . . 283
    12.1 Introduction . . . . . . . . . . 283
    12.2 Medium Contention and Its Impact . . . . . . . . . . 286
        12.2.1 TCP Performance Degradation Due to Coupling of Congestion and Medium Contention . . . . . . . . . . 286
        12.2.2 Optimal Congestion Window Size for TCP and Ideal Sending Rate . . . . . . . . . . 288
        12.2.3 Unfairness Problem Due to Medium Contention . . . . . . . . . . 290
    12.3 Wireless Congestion Control Protocol (WCCP) . . . . . . . . . . 292
        12.3.1 Channel Busyness Ratio: Sign of Congestion and Available Bandwidth . . . . . . . . . . 292
        12.3.2 Measurement of Channel Busyness Ratio in Multihop Ad Hoc Networks . . . . . . . . . . 294
        12.3.3 Inter-Node Resource Allocation . . . . . . . . . . 295
        12.3.4 Intra-Node Resource Allocation . . . . . . . . . . 297
        12.3.5 End-to-End Rate-Based Congestion Control Scheme . . . . . . . . . . 299
    12.4 Performance Evaluation . . . . . . . . . . 302
        12.4.1 Chain Topology . . . . . . . . . . 303
        12.4.2 Random Topology . . . . . . . . . . 308
    12.5 Conclusions . . . . . . . . . . 308
13 CONCLUSIONS AND FUTURE WORK . . . . . . . . . . 310
    13.1 Fairness in Mobile Ad Hoc Networks . . . . . . . . . . 310
    13.2 Quality of Service in Mobile Ad Hoc Networks . . . . . . . . . . 313
REFERENCES . . . . . . . . . . 315
BIOGRAPHICAL SKETCH . . . . . . . . . . 328
LIST OF TABLES
Table page
2–1 IEEE 802.11 system parameters. . . . . . . . . . . . . . . . . . . . . . . . 22
2–2 Saturation value of collision probability. . . . . . . . . . . . . . . . . . . . 22
3–1 QoS requirements for multimedia services. . . . . . . . . . . . . . . . . . 36
3–2 IEEE 802.11 system parameters. . . . . . . . . . . . . . . . . . . . . . . . 42
4–1 IEEE 802.11 system parameters. . . . . . . . . . . . . . . . . . . . . . . . 71
4–2 The mean, standard deviation (SD), and 97th, 99th, and 99.9th percentile delays (in seconds) for voice and video in the infrastructure mode . . . . . . . . . . 83
4–3 The mean, standard deviation (SD), and 97th, 99th, and 99.9th percentile delays (in seconds) for voice and video in the ad hoc mode . . . . . . . . . . 85
6–1 Signal-to-noise ratio and receiver sensitivity. . . . . . . . . . . . . . . . .131
7–1 Default values in the simulations. . . . . . . . . . . . . . . . . . . . . . . .172
9–1 IEEE 802.11 system parameters. . . . . . . . . . . . . . . . . . . . . . . .220
10–1 Signal-to-noise ratio and receiver sensitivity. . . . . . . . . . . . . . . . .232
10–2 Run time of different routing algorithms. . . . . . . . . . . . . . . . . . . .253
12–1 Simulation results for TCP and UDP flows. . . . . . . . . . . . . . . . . .289
12–2 Performance of WCCP and TCP in chain topology of Fig. 12–3(a) . . . . . . . . . . 303
LIST OF FIGURES
Figure page
2–1 RTS/CTS mechanism and basic access mechanism of IEEE 802.11. . . . 11
2–2 Generalized state transition diagram of one example. . . . . . . . . . . . 15
2–3 Generalized state transition diagram for transmission process. . . . . . . 19
2–4 Probability distribution of MAC layer service time. . . . . . . . . . . . . 21
2–5 PDF of service time. . . . . . . . . . . . . . . . . . . . . . . . . . . . .23
2–6 Mean of service time. . . . . . . . . . . . . . . . . . . . . . . . . . . . .23
2–7 Queue characteristics. . . . . . . . . . . . . . . . . . . . . . . . . . . . .28
2–8 MAC layer packet service time. . . . . . . . . . . . . . . . . . . . . . . 30
2–9 Comparisons between M/G/1/K, M/M/1/K models and simulation. . . . . 31
2–10 Average waiting time in non-saturated status. . . . . . . . . . . . . . . . 32
2–11 Average MAC layer service time. . . . . . . . . . . . . . . . . . . . . . 33
3–1 Channel busyness ratio and utilization. . . . . . . . . . . . . . . . . . . . 41
3–2 Collision probability and maximum normalized throughput with RTS/CTS and payload size of 8000 bits . . . . . . . . . . 45
3–3 Impact of payload size and the RTS/CTS mechanism. . . . . . . . . . . . 47
3–4 Mean and standard deviation of service time. . . . . . . . . . . . . . . . 49
3–5 Packet delay. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .54
3–6 Simulation results when payload size = 8000 bits . . . . . . . . . . 57
3–7 Simulation results when n = 50 and payload size = 8000 bits . . . . . . . . . . 58
3–8 Simulation results when n = 50 and payload size = 8000 bits . . . . . . . . . . 60
4–1 Channel busyness ratio and utilization. . . . . . . . . . . . . . . . . . . . 70
4–2 Simulation results when number of nodes equals 50 and RTS/CTS mechanism is used . . . . . . . . . . 71
4–3 Infrastructure mode: the number of real-time and TCP flows increases over time. Channel rate is 2 Mbps . . . . . . . . . . 82
4–4 End-to-end delay of all voice and video packets in infrastructure mode. . 83
4–5 Ad hoc mode: the number of real-time and TCP flows increases over time. Channel rate is 2 Mbps . . . . . . . . . . 84
4–6 End-to-end delay of all voice and video packets in ad hoc mode. . . . . . 85
4–7 Channel utilization in ad hoc mode. . . . . . . . . . . . . . . . . . . . . 86
5–1 Maximum and saturated throughput with different number of nodes (RTS/CTS is used, packet length = 1000 bytes, channel rate = 11 Mbps) . . . . . . . . . . 94
5–2 Convergence speed of multiplicative-increase phase (packet length = 1000 bytes, channel rate = 11 Mbps) . . . . . . . . . . 105
5–3 Convergence speed of AIMD phases when δ = 0.5 . . . . . . . . . . 109
5–4 Impact of payload size L and number of nodes n on the optimal threshold for channel busyness ratio brth . . . . . . . . . . 111
5–5 Fairness convergence with RTS/CTS: one greedy node joins the network every 10 seconds (packet length = 1000 bytes, each point is averaged over 1 second) . . . . . . . . . . 113
5–6 Max-min fairness under different traffic rates (packet length = 1000 bytes) . . . . . . . . . . 114
5–7 DRA: fairness with multiple channel bit rates (RTS/CTS is used). . . . . 115
5–8 802.11: fairness with multiple channel bit rates (RTS/CTS is used). . . . 115
5–9 Throughput, MAC delay and collision probability with RTS/CTS. . . . . 117
5–10 QoS performance in DRA. . . . . . . . . . . . . . . . . . . . . . . . . .118
5–11 QoS performance in 802.11. . . . . . . . . . . . . . . . . . . . . . . . .118
6–1 Interference model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .128
6–2 Carrier sensing threshold with Shannon capacity . . . . . . . . . . 130
6–3 Carrier sensing threshold with different SINR. . . . . . . . . . . . . . . .131
6–4 Carrier sensing threshold with discrete channel rates of 802.11. . . . . . . 132
6–5 Tradeoff between exposed terminal problem and hidden terminal problem. 134
6–6 Large carrier sensing range with carrier sensing strategy II for CTS/ACK. 139
6–7 Multiple carrier sensing thresholds may result in collisions. . . . . . . . .141
6–8 Bandwidth distance product. . . . . . . . . . . . . . . . . . . . . . . . .144
6–9 Maximum end-to-end throughput for different hop distance. . . . . . . .145
6–10 Spatial reuse ratio for multihop flows (a) at worst case, (b) in a single chain topology with one-way traffic . . . . . . . . . . 147
6–11 Optimum carrier sensing threshold for one-hop flows. . . . . . . . . . . .152
6–12 Optimum carrier sensing threshold for multi-hop flows. . . . . . . . . . .152
7–1 A simple scenario to illustrate the problems. . . . . . . . . . . . . . . . .162
7–2 Chain topology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .165
7–3 Proposed protocol. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .167
7–4 One simple topology. . . . . . . . . . . . . . . . . . . . . . . . . . . . .173
7–5 Simulation results for the simple topology. . . . . . . . . . . . . . . . .174
7–6 End-to-end throughput for the 9-node chain topology. . . . . . . . . . . .177
7–7 Simulation results for random one-hop flows with different minimum one-hop distance . . . . . . . . . . 177
7–8 Simulation results for multihop flows in random topology. . . . . . . . .179
8–1 Hidden terminal problem. . . . . . . . . . . . . . . . . . . . . . . . . .184
8–2 Carrier sensing range and interference range in LCS and SBA-MAC. . . . 185
8–3 Four-way handshake with busy advertisement signals. . . . . . . . . . . .190
8–4 Positions of IDFS periods in the DATA frame. . . . . . . . . . . . . . . .193
8–5 Power control in SBA-MAC. . . . . . . . . . . . . . . . . . . . . . . . .200
8–6 Occupied area for a transmission normalized over the communication radius (PC: power control for DATA frames) . . . . . . . . . . 207
8–7 Occupied area for a transmission normalized over the communication radius when dh = dt . . . . . . . . . . 207
8–8 Channel time for a transmitted packet. . . . . . . . . . . . . . . . . . . .208
8–9 Channel time for a transmitted packet. . . . . . . . . . . . . . . . . . . .209
8–10 Performance gain of SBA-MAC compared to the approach using a large carrier sensing range and the FAMA scheme . . . . . . . . . . 209
9–1 RTS/CTS mechanism and basic access mechanism of IEEE 802.11. . . . 214
9–2 Protocol stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .215
9–3 The super packet structure. . . . . . . . . . . . . . . . . . . . . . . . . .215
9–4 Throughput when channel rate is 1 Mbps, Lth = 2346 bytes and RTS/CTS mechanism is used . . . . . . . . . . 220
9–5 Throughput when channel rate is 1, 2, 5.5 and 11 Mbps and RTS/CTS mechanism is used . . . . . . . . . . 221
9–6 Chain topology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .222
9–7 Maximum end-to-end throughput of a multihop flow.. . . . . . . . . . . .224
9–8 Maximum end-to-end throughput of a multihop flow.. . . . . . . . . . . .224
10–1 Paths between the source S and the destination D . . . . . . . . . . 230
10–2 A five-link chain topology and its link conflict graph . . . . . . . . . . 235
10–3 A path with an odd cycle in the link conflict graph. . . . . . . . . . . . .239
10–4 Path capacity for different routing algorithms. . . . . . . . . . . . . . . .249
10–5 Path capacity for different routing algorithms. . . . . . . . . . . . . . . .250
10–6 Path lengths for different routing algorithms. . . . . . . . . . . . . . . .251
10–7 Source-destination distance. . . . . . . . . . . . . . . . . . . . . . . . .251
10–8 Path capacity solving time. . . . . . . . . . . . . . . . . . . . . . . . . .252
10–9 Path capacity for a single rate network. . . . . . . . . . . . . . . . . . .254
11–1 Chain topology and cross topology. . . . . . . . . . . . . . . . . . . . .259
11–2 TCP performance in a 9-node chain topology. . . . . . . . . . . . . . . .260
11–3 Optimum packet scheduling for chain topology. . . . . . . . . . . . . . .263
11–4 The packet format of RTSM and CTSR. . . . . . . . . . . . . . . . . . .266
11–5 The algorithms of backward-pressure scheme. . . . . . . . . . . . . . . .267
11–6 Message sequence for packet transmission. . . . . . . . . . . . . . . . .268
11–7 The packet scheduling for resolving congestion. . . . . . . . . . . . . . .269
11–8 Simulation results for the 9-node chain topology (Fig. 11–3) and cross topology (Fig. 11–1(b)) . . . . . . . . . . 272
11–9 Simulation results for the random topology. . . . . . . . . . . . . . . . .274
11–10 Simulation results for the random topology with mobility. . . . . . . . .277
11–11 Simulation results for the TCP traffic. . . . . . . . . . . . . . . . . . . .278
11–12 Grid topology with 16 TCP flows. . . . . . . . . . . . . . . . . . . . . .279
12–1 Chain topology with 9 nodes. Small circles denote the transmission range, and the large circles denote the sensing range . . . . . . . . . . 286
12–2 Simulation results for 9-node chain topology. . . . . . . . . . . . . . . .287
12–3 Nine-node chain topology with different traffic distribution. . . . . . . . .291
12–4 The relationship between channel busyness ratio and other metrics. . . . . 293
12–5 Rate control mechanism. . . . . . . . . . . . . . . . . . . . . . . . . . .300
12–6 Simulation results for the nine-node chain topology with one flow. . . . . 304
12–7 Performance of scenario Fig.12–3(b) . . . . . . . . . . . . . . . . . . . .305
12–8 Performance of scenario Fig.12–3(c) . . . . . . . . . . . . . . . . . . . .306
12–9 Simulation results for random topology with precomputed paths: (a) minimum flow throughput in 20 runs, (b) minimum flow throughput averaged over 20 runs, (c) maximum flow throughput averaged over 20 runs, (d) ratio of averaged maximum flow throughput to averaged minimum flow throughput. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .307
12–10 Simulation results averaged over 20 runs in the random topology: (1) aggregate throughput (Mbps), (2) fairness index, (3) end-to-end delay (s). . 308
13–1 An original topology and its flow contention graph. . . . . . . . . . . . .311
Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy
CROSS-LAYER DESIGN OF NETWORKING PROTOCOLS IN WIRELESS LOCAL AREA NETWORKS AND MOBILE AD HOC NETWORKS
By
Hongqiang Zhai
August 2006
Chair: Yuguang “Michael” Fang
Major Department: Electrical and Computer Engineering
This Ph.D. dissertation focuses on the design and analysis of efficient networking protocols in wireless local area networks and ad hoc networks. Known as Wi-Fi technology, wireless local area networks have become very popular today as an easy means of wireless access to the Internet. Wireless ad hoc networks also find many applications that need wireless access or require low-cost or immediate deployment of networked systems, such as battlefield communications, public safety networks, disaster rescue, and wireless metropolitan area networks. However, designing efficient networking protocols that provide quality of service (QoS) and reliability in these networks is a very challenging task.
Compared to wired networks, wireless networks have links that are no longer independent; bandwidth, power and processing ability are limited; channel errors happen frequently; the network topology is subject to constant change; and the network is often self-organized and distributed. These challenges lead to close coupling among the various layers of the protocol stack and to a completely different medium access control (MAC) layer, and hence call for cross-layer design between the MAC layer and the other layers.
The dissertation conducts a thorough theoretical study of a contention-based MAC standard, IEEE 802.11, and investigates the close coupling between the MAC layer and the other protocol layers, whose features have been used to provide QoS and reliability and to design efficient MAC, routing and transport protocols. The theoretical results show that the contention-based IEEE 802.11 MAC standard can support quality of service well and at the same time achieve maximum aggregate throughput by regulating the access traffic.
Guided by the theoretical studies, the protocol design demonstrates various novel ways of
cross-layer design and their great benefit in improving performance of wireless networks.
Unlike prior research on cross-layer design in wireless networks, which focused on purely theoretical formulations that are either too complicated to solve or rest on so many impractical assumptions that the resulting solutions are too simple to be useful, the theoretical studies and protocol designs in this dissertation are based on the widely used IEEE 802.11 standard and hence can have an immediate impact on products and revolutionize the way people design networked systems.
CHAPTER 1
INTRODUCTION
1.1 Motivation
With the rapid development of wireless communication technologies and the proliferation of mobile communication and computing devices like cell phones, PDAs and laptops, wireless local area networks (WLANs) and mobile ad hoc networks (MANETs) have emerged as important parts of the envisioned future ubiquitous communication. In recent years, the IEEE 802.11 wireless LAN has been increasingly employed to access the Internet because of its simple deployment and low cost. MANETs are finding a variety of applications such as disaster rescue, battlefield communications, hostile environment monitoring, and collaborative computing. The widely studied sensor networks are special applications of MANETs.
However, networking protocols face many challenges in working efficiently in WLANs and MANETs. Unlike wired networks, WLANs and MANETs have unique characteristics that seriously degrade the performance of networking protocols: time-varying channels due to path loss, fading and interference; vulnerable shared medium access due to random access collisions; and limited battery energy. In MANETs, the network topology may change continuously, causing frequent route breakages and re-routing activity. Moreover, MANETs are by nature self-organized, self-controlled and distributed. In other words, there is no centralized controller with perfect knowledge of all the nodes in the network. Instead, each node has only an incomplete, and sometimes skewed, view of the network, and must therefore make decisions with imperfect information. Because of all these hurdles posed by WLANs and MANETs, designing simple, efficient, fair, and energy-efficient networking protocols, while highly desirable, is not an easy task.
These challenges call for cross-layer design of the networking protocols in WLANs and MANETs. For example, by scheduling the node with good channel quality to access the channel, medium access control (MAC) protocols can achieve higher throughput. TCP, the traditional congestion control protocol for the Internet, treats any packet loss as a sign of congestion. In wireless networks, however, packet loss may instead be caused by poor channel quality or by route failure due to mobility. TCP can achieve better performance if the source can differentiate among these causes of packet loss by obtaining information from the routing protocols and the physical and MAC layers. Routing protocols can likewise avoid unnecessary re-routing messages if they can distinguish packet losses due to medium collisions from those due to mobility. Quality of service (QoS) and fairness seem to be formidable tasks considering the unreliable physical channel, medium collisions, and dynamically changing network topology and traffic load; cross-layer design seems a must to provide node-based and flow-based fairness and end-to-end QoS guarantees.
1.2 Organization of the Dissertation
In this dissertation, we first conduct a performance analysis of the Distributed Coordination Function (DCF) protocol of the IEEE 802.11 MAC standard in Chapter 2. We propose a new model that uses the signal transfer function of a generalized state transition diagram to characterize the probability generating function of the medium access delay. With the probability distribution of the medium access delay and queueing theory, most performance metrics, such as throughput, delay, packet loss rate and various queue characteristics, can be analyzed for WLANs. Our results show that in the non-saturated state (i.e., each node does not contend for the channel all the time and the total traffic rate does not exceed the network capacity), the performance depends on the total traffic and is almost indifferent to the number of transmitting stations. In the saturated state (i.e., each node has enough traffic to keep contending for the shared wireless channel), the number of transmitting stations affects the performance more significantly.
In Chapter 3, we further derive the maximum throughput of the IEEE 802.11 DCF protocol and accurate estimates of delay and delay variation in wireless LANs based on our work in Chapter 2. We show that, by controlling the total traffic rate, the original 802.11 DCF protocol can support strict QoS requirements, such as those of voice over IP or streaming video, and at the same time achieve high channel utilization. This result is a significant departure from most recent work, which only supports service differentiation rather than QoS guarantees.
The studies in Chapter 3 also suggest a good metric, the channel busyness ratio, to represent the network status in terms of throughput, medium access delay and collision probability. As the name implies, the channel busyness ratio is the ratio of the time intervals in which the channel is busy, due to successful transmissions or collisions, to the total time. Based on the physical carrier sensing and virtual carrier sensing mechanisms of the IEEE 802.11 standard, this metric is very easy to measure and requires only a few simple calculations at the MAC layer. Hence it can be used to facilitate the regulation of the total input traffic to support QoS.
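Concretely, the measurement amounts to summing the busy intervals observed in a window. The following Python sketch illustrates the calculation; the function name and interval representation are our own illustration, not the dissertation's implementation:

```python
def channel_busyness_ratio(busy_intervals, window):
    """Fraction of the observation window during which the channel was
    sensed busy (successful transmissions plus collisions).

    busy_intervals: non-overlapping (start, end) pairs within [0, window].
    """
    busy_time = sum(end - start for start, end in busy_intervals)
    return busy_time / window

# A node that sensed the channel busy for 0.25 s and 0.35 s within a
# 1-second window measures a busyness ratio of 0.6.
ratio = channel_busyness_ratio([(0.0, 0.25), (0.40, 0.75)], 1.0)
```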
In Chapter 4, we propose a call admission and rate control scheme to support QoS guarantees in wireless LANs. Based on the channel busyness ratio obtained at the MAC layer, the call admission control algorithm regulates the admission of real-time and streaming traffic, and the rate control algorithm controls the transmission rate of best effort traffic. As a result, real-time and streaming traffic is supported with statistical QoS guarantees, and best effort traffic can fully utilize the residual channel capacity left by the real-time and streaming traffic.
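A minimal sketch of such a scheme, assuming a hypothetical target busyness threshold (the actual threshold and algorithms are developed in Chapter 4), might look like:

```python
TARGET_BUSYNESS = 0.9  # illustrative threshold, not the chapter's tuned value

def admit_flow(measured_busyness, flow_channel_share):
    """Admit a new real-time/streaming flow only if its estimated share
    of channel time keeps the busyness ratio under the target."""
    return measured_busyness + flow_channel_share <= TARGET_BUSYNESS

def best_effort_budget(measured_busyness, channel_capacity_bps):
    """Best effort traffic is rate-limited to the residual capacity left
    by real-time and streaming traffic."""
    residual = max(0.0, TARGET_BUSYNESS - measured_busyness)
    return residual * channel_capacity_bps
```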
In Chapter 5, we further develop the scheme of Chapter 4 into a comprehensive protocol, with fairness as a major focus. We propose a novel three-phase control mechanism to utilize network resources fairly and efficiently and to guarantee a short medium access delay. The protocol also integrates the three-phase control mechanism with a call admission control scheme and a packet concatenation scheme in a single unified framework to better support QoS and multiple channel rates in addition to efficiency and fairness.
Having examined the performance of wireless LANs and proposed a scheme that supports QoS as well as high efficiency, we naturally wonder whether these techniques can be applied to the multihop case, i.e., MANETs. However, MANETs are much more complicated than wireless LANs and pose many new challenges, such as the infamous hidden and exposed terminal problems. Before proposing any designs, we must first thoroughly understand what the problems are and how they impact network performance.
In Chapter 6, we study the impact of the physical carrier sensing and virtual carrier sensing mechanisms on the system performance of MANETs. A theoretical model is developed to determine the optimal carrier sensing range that maximizes the system throughput when multiple discrete channel rates coexist in the network. We also study how to utilize the multi-rate capability of the IEEE 802.11 standard, and which neighbor and channel rate should be used for each hop of a transmission. A novel routing metric, the bandwidth distance product, is proposed to perform this task, and it can greatly increase the system throughput.
In Chapter 7, we first study the various medium access control problems that arise when the IEEE 802.11 DCF protocol is used, such as the hidden and exposed terminal problems, the receiver blocking problem, and the intra-flow and inter-flow contention problems. Our studies show that these problems affect not only the efficiency of the MAC protocol but also the performance of the higher layers, causing, for example, unnecessary re-routing activities due to false route failures and unfairness among multiple flows. Motivated by the analysis of these problems, we propose a new dual-channel MAC protocol. The new MAC protocol uses an out-of-band busy tone and two communication channels, one for control frames and the other for data frames. The newly designed message exchange sequence provides a comprehensive solution to all the aforementioned problems. Extensive simulations demonstrate that our scheme provides a much more stable link layer, greatly improves spatial reuse, and works well in reducing packet collisions. It improves the throughput by 8% to 28% for one-hop flows and by 2 to 5 times for multihop flows under heavy traffic compared to the IEEE 802.11 MAC protocol.
However, sometimes only a single channel and a single transceiver are available. In this case we need an efficient MAC protocol other than DUCHA to address these problems. Therefore, in Chapter 8, we propose a complete single-channel solution to both the hidden and exposed terminal problems. The new solution inserts dummy bits in the DATA frame and allows the receiver to transmit short busy advertisements during the transmission time of the dummy bits to notify hidden terminals of the ongoing transmission. Because the transmission of the DATA frame is protected by the short busy advertisement signals, we can significantly reduce the carrier sensing range to increase the spatial reuse ratio, which noticeably mitigates the exposed terminal problem. We also demonstrate that adding power control to the solution further improves the system performance remarkably.
In Chapter 9, we study how physical layer information can be used at the MAC layer to improve system performance. We propose a new adaptive packet concatenation (APC) scheme and demonstrate that APC can improve the system throughput several times over in both WLANs and MANETs.
In Chapter 10, we focus on the impact of routing metrics on the throughput of selected paths in MANETs. Because the MAC and physical layers have a great impact on routing performance, a good routing algorithm must consider the features of these two layers. We first perform a comprehensive study of the combined impact of multiple rates, interference and packet loss rate on the maximum end-to-end throughput, or path capacity, of selected paths, deriving a theoretical model that accounts for all of these factors. We also propose a new routing metric, the interference clique transmission time, to efficiently utilize information from the physical and MAC layers to find good paths. Based on the proposed theoretical model, we evaluate the capability of various routing metrics, including hop count, expected transmission times, end-to-end transmission delay or medium time, link rate, bandwidth distance product, and interference clique transmission time, to find a path with high throughput. The results show that the interference clique transmission time is a better routing metric than all the others.
In Chapter 11, by carefully studying the intra-flow and inter-flow contention problems, we find that network congestion is closely coupled with medium access contention. We then propose a framework of distributed flow control and medium access to mitigate MAC layer contention, overcome congestion and increase the throughput of traffic flows in shared-channel environments. The key idea is based on the observation that, under the IEEE 802.11 MAC protocol, the maximum throughput for a standard chain topology is 1/4 of the channel bandwidth, and the optimum packet scheduling allows simultaneous transmissions at nodes that are four hops apart. The proposed fully distributed scheme generalizes this optimum scheduling to any traffic flow that may encounter intra-flow and inter-flow contention. Our scheme has been shown to perform better and achieve higher throughput at light to heavy traffic loads compared to the original IEEE 802.11 MAC protocol. Moreover, it also achieves much better and more stable performance in terms of delay, fairness and scalability, with low and stable control overhead.
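The four-hop reuse rule behind this scheduling can be checked mechanically. The sketch below is our own simplification for a single chain, ignoring inter-flow contention:

```python
def conflict_free(sender_positions, reuse_distance=4):
    """Under the optimum schedule for a chain, simultaneous senders must
    be at least `reuse_distance` hops apart, so at most one node in any
    four consecutive hops transmits and the end-to-end throughput is
    1/4 of the channel bandwidth."""
    s = sorted(sender_positions)
    return all(later - earlier >= reuse_distance
               for earlier, later in zip(s, s[1:]))

# In a 9-node chain (nodes 0..8), nodes 0 and 4 may send together,
# but nodes 0 and 2 interfere.
```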
The scheme proposed in Chapter 11 provides a good solution to congestion control at the network and data link layers. However, to support the end-to-end reliability required by various services, such as web traffic and email, end-to-end flow and congestion control is also necessary. Chapter 12 studies the close coupling between TCP traffic and medium contention and finds that TCP sources are very greedy, leading to severe network congestion and medium collisions, and that the window-based congestion control algorithm is too coarse in its granularity, causing throughput instability and excessively long delays. Based on the novel use of the channel busyness ratio, which Chapter 3 shows to be an accurate indicator of network utilization and congestion status, a new end-to-end congestion control protocol is proposed to efficiently and fairly support the transport service in multihop ad hoc networks.
Finally, Chapter 13 discusses some future research issues, including fairness and QoS support in MANETs.
CHAPTER 2
PERFORMANCE OF THE IEEE 802.11 DCF PROTOCOL IN WIRELESS LANS
The IEEE 802.11 MAC protocol is the de facto standard for wireless LANs, and has also been implemented in many network simulation packages for wireless multi-hop ad hoc networks. However, it is well known that, as the number of active stations increases, the performance of the IEEE 802.11 MAC in terms of delay and throughput degrades dramatically, especially when each station's load approaches its saturation state. To explore the inherent problems in this protocol, it is important to characterize the probability distribution of the packet service time at the MAC layer. In this chapter, by modeling the exponential backoff process as a Markov chain, we use the signal transfer function of the generalized state transition diagram to derive an approximate probability distribution of the MAC layer service time. We then present the discrete probability distribution of the MAC layer packet service time, which is shown to accurately match the simulation data from network simulations. Based on the probability model for the MAC layer service time, we analyze several performance metrics of the wireless LAN and give a better explanation of the performance degradation in delay and throughput at various traffic loads. Furthermore, we demonstrate that the exponential distribution is a good approximation model of the MAC layer service time for queueing analysis, and that the presented queueing models accurately match the simulation data obtained from ns-2 when the arrival process at the MAC layer is Poisson.
2.1 Introduction
The Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol used in the IEEE 802.11 MAC is the standard protocol for wireless local area networks (LANs), and has also been widely implemented in many wireless testbeds and simulation packages for wireless multi-hop ad hoc networks.
However, many problems are encountered in the higher protocol layers of IEEE 802.11 wireless networks. It has been observed that the packet delay increases dramatically when the number of active stations increases. Packets may be dropped either due to buffer overflow or because of serious MAC layer contention. Such packet losses may affect higher-layer networking schemes such as TCP congestion control and route maintenance. Routing simulations [19, 108] over mobile ad hoc networks indicate that network capacity is poorly utilized in terms of throughput and packet delay when the IEEE 802.11 MAC protocol is integrated with routing algorithms. TCP in wireless ad hoc networks is unstable and has poor throughput due to its inability to distinguish link failure from congestion. Moreover, one TCP connection between one-hop neighbors may capture the entire bandwidth, leading to the one-hop unfairness problem [64, 140, 46, 110].
Performance analysis of the IEEE 802.11 MAC protocol could help discover the inherent causes of the above problems and may suggest possible solutions. Many papers on this topic have been published [20, 22, 15, 44, 58, 134, 85]. Cali et al. [20, 22] derived the protocol capacity of the IEEE 802.11 MAC protocol and presented an adaptive backoff mechanism to replace the exponential backoff mechanism. Bianchi [15] proposed a Markov chain model for the binary exponential backoff procedure to analyze and compute the saturated throughput of IEEE 802.11 DCF. All of these papers assume the saturated scenario in which all stations always have data to transmit. Based on the saturated throughput in Bianchi's model, Foh and Zuckerman presented an analysis of the mean packet delay at different throughputs for the IEEE 802.11 MAC [44]. Hadzi-Velkov and Spasenovski also analyzed the throughput and mean packet delay in the saturated case by incorporating frame error rates [58]. Kim and Hou [85] analyzed the protocol capacity of the IEEE 802.11 MAC under the assumption that the number of active stations with packets ready for transmission is large.
To the best of our knowledge, there has been no comprehensive study of the queue dynamics of IEEE 802.11 wireless LANs. Delay analysis has been limited to the derivation of the mean, while the higher moments and the probability distribution function of the delay remain untouched. Most current papers focus on performance analysis in saturated traffic scenarios, and a comprehensive performance study under non-saturated traffic remains open.
In this chapter, to address the above issues, we first characterize the probability distribution of the MAC layer packet service time, i.e., the time interval between the instant a packet starts to contend for transmission and the instant the packet either is acknowledged as correctly received by the intended receiver or is dropped. Based on this probability distribution model, we then study the queueing performance of wireless LANs at different traffic loads under the IEEE 802.11 MAC protocol. Finally, we evaluate the accuracy of the exponential probability distribution model of the MAC layer service time in queueing analysis, through both an analytical approach and simulations.
2.2 Preliminaries
2.2.1 Distributed Coordination Function (DCF)
Before presenting our analysis of the 802.11 MAC, we first briefly describe the main procedures of the DCF in the 802.11 MAC protocol [68]. In the DCF protocol, a station shall ensure that the medium is idle before attempting to transmit. It selects a random backoff interval less than or equal to the current contention window (CW) size based on the uniform distribution, and then decreases the backoff timer by one at each time slot in which the medium is idle (after waiting for DIFS following a successful transmission or EIFS following a collision). If the medium is determined to be busy, the station suspends its backoff timer until the end of the current transmission. Transmission commences whenever the backoff timer reaches zero. When there is a collision during the transmission, or when the transmission fails, the station invokes the backoff procedure: the contention window size CW, which takes an initial value of CWmin, doubles its value at each invocation until it reaches the maximum upper limit CWmax, and remains at CWmax until it is reset. The station then sets its backoff timer to a random number uniformly distributed over the interval [0, CW] and attempts to retransmit when the backoff timer reaches zero again. If the maximum transmission failure limit is reached, retransmission stops, CW is reset to CWmin, and the packet is discarded [68]. The RTS/CTS mechanism and basic access mechanism of IEEE 802.11 are shown in Fig. 2–1.
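The backoff procedure can be sketched as a small Monte Carlo routine. Following this chapter's approximation that each attempt independently collides with a fixed probability, and using typical 802.11 contention-window constants purely for illustration:

```python
import random

CW_MIN, CW_MAX, RETRY_LIMIT = 31, 1023, 7  # typical 802.11 values (illustrative)

def serve_one_packet(p_collision, rng):
    """Binary exponential backoff for one packet. Returns (delivered,
    backoff_slots): whether the packet was delivered before the retry
    limit was reached, and the total number of idle slots counted down."""
    cw, slots = CW_MIN, 0
    for _attempt in range(RETRY_LIMIT + 1):
        slots += rng.randint(0, cw)          # uniform backoff in [0, CW]
        if rng.random() >= p_collision:      # transmission succeeds
            return True, slots
        cw = min(2 * (cw + 1) - 1, CW_MAX)   # CW doubles, capped at CW_MAX
    return False, slots                      # retry limit reached: drop
```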
Figure 2–1: RTS/CTS mechanism and basic access mechanism of IEEE 802.11
2.2.2 System Modeling
Each mobile station is modeled as a queueing system, characterized by its arrival process and service time distribution. The saturated state is reached when each station has heavy traffic and always has packets to transmit. The non-saturated state, i.e., under light or moderate traffic load, is characterized by a non-zero probability that the queue length is zero.

The service time of the queueing system is the MAC layer packet service time defined in Section 2.1. The IEEE 802.11 MAC adopts the binary exponential backoff mechanism for the transmission of each packet, which may collide with other transmissions in the air at each transmission attempt. The collision probability pc is determined by the probability that at least one other station transmits in the same backoff time slot in which the considered station attempts transmission. We assume that this probability does not change and is independent during the transmission of each packet, regardless of the number of retransmissions suffered. For the saturated case, this approximation was used by Bianchi [15] to derive the saturated throughput. For the non-saturated case, the collision probability is more complex: it depends on the number of stations with packets ready for transmission and on the backoff states of those stations. Between two transmission attempts at the considered station, other stations may complete several successful transmissions and/or encounter several collisions, and new packets may arrive at stations whether or not they were previously contending for transmission. Intuitively, this approximation becomes more accurate as the number of stations grows, in both the saturated and non-saturated cases. For simplicity, we use the same approximation for both cases and argue that the collision probability does not change significantly as long as the input traffic rate from the higher layer at each station remains the same during the service of each packet. We can then model the binary exponential backoff mechanism as a Markov chain, which makes possible the derivation of the probability distribution of the service time in the next section. Later in this chapter, we show that the analytical results from this approximation agree very well with the simulation results in the non-saturated case.
2.3 The Probability Distribution of the MAC Layer Service Time
2.3.1 MAC Layer Service Time
As described in Section 2.2, there are three basic processes when the MAC layer transmits a packet: the decrement process of the backoff timer, the successful packet transmission process, which takes a time period of $T_{suc}$, and the packet collision process, which takes a time period of $T_{col}$. Here, $T_{suc}$ is the random variable representing the period during which the medium is sensed busy because of a successful transmission, and $T_{col}$ is the random variable representing the period during which the medium is sensed busy by each station due to collisions.

The MAC layer service time is the time interval from the instant a packet becomes the head of the queue and starts to contend for transmission to the instant either the packet is acknowledged after a successful transmission or the packet is dropped. This time is important when we examine the performance of the higher protocol layers. The distribution of the MAC layer service time is evidently a discrete probability distribution, because the smallest time unit of the backoff timer is a time slot. $T_{suc}$ and $T_{col}$ depend on the transmission rate, the length of the packet and the overhead (with a discrete unit, i.e., a bit), and the specific transmission scheme (the basic access DATA/ACK scheme or the RTS/CTS scheme) [15, 68].
2.3.2 Probability Generating Functions (PGF) of MAC Layer Service Time
The MAC layer service time is a non-negative random variable, denoted $T_S$, which takes the value $t_{s_i}$ with probability $p_i$, in units of the one-bit transmission time or the smallest system clock unit, $i = 0, 1, 2, \ldots$. The PGF of $T_S$ is given by

$$P_{T_S}(Z) = \sum_{i=0}^{\infty} p_i Z^{t_{s_i}} = p_0 Z^{t_{s_0}} + p_1 Z^{t_{s_1}} + p_2 Z^{t_{s_2}} + \cdots \qquad (2.1)$$

It completely characterizes the discrete probability distribution of $T_S$ and has a few important properties:

$$P_{T_S}(1) = 1$$
$$E[T_S] = \left.\frac{\partial}{\partial Z} P_{T_S}(Z)\right|_{Z=1} = P'_{T_S}(1)$$
$$\mathrm{VAR}[T_S] = P''_{T_S}(1) + P'_{T_S}(1) - \{P'_{T_S}(1)\}^2 \qquad (2.2)$$

where the prime indicates the derivative.
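As a quick numerical sanity check of Equation 2.2 (not part of the original analysis), the moments computed from the PGF derivatives at $Z = 1$ match those computed directly from a toy discrete distribution:

```python
def moments_from_pgf(dist):
    """dist maps service time t_i -> probability p_i.
    For P(Z) = sum p_i Z^{t_i}:  P'(1) = sum p_i t_i  and
    P''(1) = sum p_i t_i (t_i - 1).  Returns (E[T], Var[T]) computed
    via Equation 2.2."""
    d1 = sum(p * t for t, p in dist.items())            # P'(1)
    d2 = sum(p * t * (t - 1) for t, p in dist.items())  # P''(1)
    return d1, d2 + d1 - d1 ** 2

dist = {2: 0.5, 5: 0.3, 9: 0.2}          # toy distribution, in slots
mean, var = moments_from_pgf(dist)
direct_mean = sum(t * p for t, p in dist.items())
direct_var = sum(t * t * p for t, p in dist.items()) - direct_mean ** 2
# mean == direct_mean and var == direct_var (up to rounding)
```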
To derive the PGF of the MAC layer service time, we model the transmission process of each packet as a Markov chain in the following subsections. Here we first discuss how to derive the PGF of the service time from the Markov chain.

The state in which the packet leaves the mobile station, i.e., is successfully transmitted or dropped, is the absorbing state of the Markov chain for the backoff mechanism. To obtain the average transition time to the absorbing state of the Markov chain, we could use the matrix geometric approach. However, for the Markov chain of $T_S$, which has different transition times on different branches, this requires a new matrix formulation to accommodate the different transition times, and its solution always involves extraneous, complicated computations [30]. Here, we instead apply the generalized state transition diagram, from which we can easily derive the PGF of $T_S$ and obtain any $n$th moment of $T_S$.

In the generalized state transition diagram, we mark the transition time on each branch along with the transition probability in the state transition diagram (the Markov chain). The transition time, which is the duration for the state transition to take place, is expressed as an exponent of the variable $Z$ on each branch. Thus, the probability generating function of the total transition time can be obtained from the signal transfer function of the generalized state transition diagram using the well-known Mason formula [30, 112].
To illustrate how the generalized Markov chain model works, we show a simple example for a MAC mechanism that allows infinite retransmissions for each packet without any backoff mechanism. If the random variable $F$ is defined as the duration of the state transition from state "1" to state "2" in Fig. 2–2, its PGF is simply the signal transfer function of the state transition. In Fig. 2–2, $p$ is the collision probability, $1-p$ is the successful transmission probability, $\tau_1$ is the collision time, and $\tau_2$ is the successful transmission time. So the PGF of the random variable $F$ is

$$P_F(Z) = \frac{(1-p)Z^{\tau_2}}{1 - pZ^{\tau_1}} \qquad (2.3)$$

This satisfies Equation 2.2, that is, $P_F(1) = 1$, and its mean transition time is

$$P'_F(1) = \frac{p}{1-p}\,\tau_1 + \tau_2 \qquad (2.4)$$
Figure 2–2: Generalized state transition diagram of an example
On the other hand, we can easily obtain the average number of collisions/retransmissions, $N_C = p/(1-p)$. The average transition time can then be obtained directly as $N_C \times \tau_1 + \tau_2$, which is the same as Equation 2.4.
2.3.3 The Processes of Collision and Successful Transmission
We first study the RTS/CTS mechanism. As shown in Fig. 2–1, the period of a successful transmission is

$$T_{suc} = RTS + CTS + DATA + ACK + 3\,SIFS + DIFS \qquad (2.5)$$

and the period of a collision is

$$T_{col} = RTS + SIFS + ACK + DIFS = RTS + EIFS \qquad (2.6)$$

$T_{col}$ is a fixed value, so its PGF $C_t(Z)$ is

$$C_t(Z) = Z^{RTS+EIFS} \qquad (2.7)$$

$T_{suc}$ is a random variable determined by the distribution of the packet length. In the case that the length of the DATA frame is uniformly distributed on $[l_{min}, l_{max}]$, its PGF $S_t(Z)$ is

$$S_t(Z) = Z^{RTS+CTS+ACK+3SIFS+DIFS}\,\frac{1}{l_{max}-l_{min}+1}\sum_{i=l_{min}}^{l_{max}} Z^i \qquad (2.8)$$

In the case that the length of the DATA frame is a fixed value $l_D$, its PGF $S_t(Z)$ is

$$S_t(Z) = Z^{RTS+CTS+l_D+ACK+3SIFS+DIFS} \qquad (2.9)$$
If the basic scheme is adopted,Tcol is determined by the longest one of the collided
packets. When the probability of three or more packets simultaneously colliding is ne-
glected, its probability distribution can be approximated by the following equation,
Pr{Tcol = i} = Pr{l1 = i, l2 6 i}+ Pr{l2 = i, l1 6 i} − Pr{l1 = i, l2 = i} (2.10)
whereli(i = 1, 2) is the packet length of theith collided packet. Thus we could obtain that
Ct(Z) ≈ ZEIFS 1
(lmax − lmin + 1)2
lmax∑
i=lmin
(2i− 2lmin + 1)Zi (2.11)
St(Z) = ZSIFS+ACK+DIFS 1
lmax − lmin + 1
lmax∑
i=lmin
Zi (2.12)
for the case that the length of DATA has a uniform distribution in [lmin, lmax], or
Ct(Z) = Z lD+EIFS (2.13)
St(Z) = Z lD+SIFS+ACK+DIFS (2.14)
for the case that the length of DATA is a fixed valuelD.
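The approximate distribution of Tcol in Equations (2.10)–(2.11) can be verified by brute force. The sketch below (lmin and lmax are hypothetical values, in slots) builds the pmf of the longer of two independent uniform packet lengths and checks its mean against direct enumeration:

```python
# Distribution of T_col under the basic access scheme: the longer of two
# i.i.d. uniform packet lengths, per Eqs. (2.10)-(2.11). lmin and lmax
# (in slots) are hypothetical values.
lmin, lmax = 10, 20
N = lmax - lmin + 1
pmf = {i: (2 * (i - lmin) + 1) / N**2 for i in range(lmin, lmax + 1)}
mean = sum(i * p for i, p in pmf.items())

# brute-force check: enumerate both collided packet lengths directly
brute = sum(max(a, b)
            for a in range(lmin, lmax + 1)
            for b in range(lmin, lmax + 1)) / N**2
```

The pmf sums to one and its mean matches the enumeration exactly, confirming the max-of-two-uniforms form.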
2.3.4 Decrement Process of Backoff Timer
In the backoff process, if the medium is idle, the backoff timer will decrease by one for
every idle slot detected. When detecting an ongoing successful transmission, the backoff
timer will be suspended and deferred a time period ofTsuc, while if there are collisions
among the stations, the deferring time will beTcol.
As mentioned in Section 2.2, pc is the probability of a collision seen by a packet
being transmitted on the medium. Assuming that there are n stations in the wireless LAN
we are considering and that the packet arrival processes at all stations are independent and
identically distributed, we observe that pc is also the probability that there is at least one
packet transmission in the medium among the other (n−1) stations in the interference range of
the station under consideration. This yields

pc = 1 − [1 − (1 − p0)τ]^(n−1)    (2.15)

where p0 is the probability that there are no packets ready to transmit at the MAC layer in
the wireless station under consideration, and τ is the packet transmission probability that
the station transmits in a randomly chosen slot time given that the station has packets to
transmit.
Let Psuc be the probability that there is exactly one successful transmission among the other (n−1)
stations in the considered slot time, given that the current station does not transmit. Then,

Psuc = (n−1)(1 − p0)τ[1 − (1 − p0)τ]^(n−2) = (n − 1)[(1 − pc)^((n−2)/(n−1)) + pc − 1]    (2.16)

Then pc − Psuc is the probability that there are collisions among the other (n−1) stations
(or neighbors).
Thus, the backoff timer has probability 1 − pc of decrementing by 1 after an empty
slot time σ, probability Psuc of staying in the original state for Tsuc, and probability
pc − Psuc of staying in the original state for Tcol. So the decrement process of the backoff timer
is a Markov process. The signal transfer function of its generalized state transition diagram
is

Hd(Z) = (1 − pc)Z^σ / [1 − Psuc·St(Z) − (pc − Psuc)·Ct(Z)]    (2.17)

From the above formula, we observe that Hd(Z) is a function of pc, the number of stations
n, and the dummy variable Z.
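Equation (2.17) can be sanity-checked numerically for the fixed-packet-length case, where St and Ct are monomials. In the sketch below all parameter values are hypothetical (times in slots); the mean decrement time obtained by differentiating Hd at Z = 1 is compared with the closed form σ + [Psuc·Tsuc + (pc − Psuc)·Tcol]/(1 − pc), which follows from Eq. (2.17) by differentiation:

```python
# Mean time for one backoff-timer decrement, from the signal transfer
# function H_d(Z) of Eq. (2.17); all values are hypothetical (slots).
pc, Psuc = 0.3, 0.2
sigma, Tsuc, Tcol = 1.0, 100.0, 40.0   # fixed-length DATA: St, Ct are monomials
St = lambda Z: Z**Tsuc
Ct = lambda Z: Z**Tcol

def Hd(Z):
    # Eq. (2.17)
    return (1 - pc) * Z**sigma / (1 - Psuc * St(Z) - (pc - Psuc) * Ct(Z))

h = 1e-7
mean_numeric = (Hd(1 + h) - Hd(1 - h)) / (2 * h)
mean_closed = sigma + (Psuc * Tsuc + (pc - Psuc) * Tcol) / (1 - pc)
```

As expected, Hd(1) = 1 (it is a proper PGF) and the two mean values coincide.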
2.3.5 Markov Chain Model for the Exponential Backoff Procedure
Whenever the backoff timer reaches zero, transmission shall commence. According to
the definition ofpc, the station has the probability 1-pc to finish the transmission afterTsuc,
and the probabilitypc to double contention window size and enter a new backoff procedure
18
until the maximum retransmission limit is reached afterTcol. Since the decrement process
of backoff timer is a Markov process as discussed above, the whole exponential backoff
procedure is also a Markov process.
Let W be the minimum contention window size CWmin plus 1. Following a procedure
similar to Bianchi's [15], and noticing that the transition probability at each
branch of the Markov chain differs from that work (which only considered the saturated
state and did not account for the contention window being reset after the maximum
α retransmissions, as defined in the protocol [68]), we can obtain (please refer to
Section 2.3.8)
τ = 2(1 − pc^(α+1)) / [1 − pc^(α+1) + (1 − pc)W Σ_{i=0}^{α} (2pc)^i],    α ≤ m

τ = 2(1 − pc^(α+1)) / [1 − pc^(α+1) + pc·W Σ_{i=0}^{m−1} (2pc)^i + W(1 − 2^m·pc^(α+1))],    α > m    (2.18)
where m is the maximum number of stages allowed in the exponential backoff procedure (the definition is clarified below). We will use Equations (2.15) and (2.18) in the
queueing analysis to derive the collision probability at different input traffic levels in Section 2.4.
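Equation (2.18) translates directly into code. The sketch below implements both branches; as consistency checks, in the limit pc → 0 it reduces to τ = 2/(W + 1) (a station transmits once per (W + 1)/2 slots on average), and the two branches agree at α = m (the illustrative parameter values are hypothetical):

```python
def tau(pc, W, m, alpha):
    # Transmission probability per slot for a backlogged station, Eq. (2.18);
    # W = CWmin + 1, m = maximum backoff stage, alpha = retransmission limit.
    num = 2 * (1 - pc**(alpha + 1))
    if alpha <= m:
        den = (1 - pc**(alpha + 1)
               + (1 - pc) * W * sum((2 * pc)**i for i in range(alpha + 1)))
    else:
        den = (1 - pc**(alpha + 1)
               + pc * W * sum((2 * pc)**i for i in range(m))
               + W * (1 - 2**m * pc**(alpha + 1)))
    return num / den
```

τ decreases as pc grows, reflecting the longer average backoff at higher collision probability.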
2.3.6 Generalized State Transition Diagram
Now, it is possible to draw the generalized state transition diagram for the packet
transmission process, as shown in Fig. 2–3. In Fig. 2–3, {s(t), b(t)} is the state of the bi-dimensional discrete-time Markov chain, where b(t) is the stochastic process representing
the backoff timer count for a given station, and s(t) is the stochastic process representing
the backoff stage, with values (0, ..., α), for the station at time t. Let m be the "maximum
backoff stage" at which the contention window size takes its maximum value, i.e., CWmax
= 2^m(CWmin + 1) − 1. At each "backoff stage" i ∈ [0, α], the contention window size
CWi¹ = Wi − 1, where Wi = 2^i(CWmin + 1) if 0 ≤ i ≤ m, and Wi = CWmax + 1 if m ≤ i ≤ α.
Figure 2–3: Generalized state transition diagram for transmission process
As defined before, the random variable TS is the duration of time taken for a state
transition from the start state (beginning to be served) to the end state (being transmitted
successfully, or discarded after the maximum α retransmission failures). Thus, its Probability Generating Function (PGF), denoted as B(Z), which is a function of pc, n and Z, is

¹ The set of CW values shall be sequentially ascending integer powers of 2, minus 1, beginning with CWmin, and continuing up to and including CWmax [68].
simply the signal transfer function from the start state to the end state, given by:

HWi(Z) = Σ_{j=0}^{2^i·W − 1} Hd^j(Z) / (2^i·W),    (0 ≤ i ≤ m)
HWi(Z) = HWm(Z),    (m < i ≤ α)
Hi(Z) = Π_{j=0}^{i} HWj(Z),    (0 ≤ i ≤ α)
B(Z) = (1 − pc)St(Z) Σ_{i=0}^{α} (pc·Ct(Z))^i Hi(Z) + (pc·Ct(Z))^(α+1) Hα(Z)    (2.19)
Since B(Z) can be expanded in a power series, i.e.,

B(Z) = Σ_{i=0}^{∞} Pr(TS = i)Z^i    (2.20)

we can obtain an arbitrary nth moment of TS by differentiation (hence the mean value and
the variance), where the unit of TS is the slot. For example, the mean is given by

μ^(−1) = E[TS] = dB(Z)/dZ |_{Z=1}    (2.21)

where μ is the MAC layer service rate.
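The mean service time of Equation (2.21) can be evaluated numerically from Equation (2.19). The sketch below assembles Hd, HWi, Hi and B for fixed-length DATA (so St and Ct are monomials) and differentiates B at Z = 1; all parameter values, including Psuc, are hypothetical and chosen small so the computation stays cheap:

```python
# All parameter values below are hypothetical (times in slots).
pc, Psuc, W, m, alpha = 0.1, 0.05, 8, 3, 5
sigma, Tsuc, Tcol = 1.0, 50.0, 20.0
St = lambda Z: Z**Tsuc                 # fixed-length DATA: monomial PGF
Ct = lambda Z: Z**Tcol

def Hd(Z):                             # decrement PGF, Eq. (2.17)
    return (1 - pc) * Z**sigma / (1 - Psuc * St(Z) - (pc - Psuc) * Ct(Z))

def HW(i, Z):                          # uniform backoff draw at stage i
    Wi = 2**min(i, m) * W
    x = Hd(Z)
    return sum(x**j for j in range(Wi)) / Wi

def H(i, Z):                           # product over stages 0..i
    prod = 1.0
    for j in range(i + 1):
        prod *= HW(j, Z)
    return prod

def B(Z):                              # service-time PGF, Eq. (2.19)
    tail = (pc * Ct(Z))**(alpha + 1) * H(alpha, Z)
    return (1 - pc) * St(Z) * sum((pc * Ct(Z))**i * H(i, Z)
                                  for i in range(alpha + 1)) + tail

h = 1e-7
ETs = (B(1 + h) - B(1 - h)) / (2 * h)  # mean service time (slots), Eq. (2.21)
```

B(1) = 1 confirms Equation (2.2) for the full service-time PGF, and the mean exceeds Tsuc since every served packet needs at least one successful transmission.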
2.3.7 Probability Distribution Modeling
From the probability generating function (PGF) of the MAC layer service time, we
can easily obtain the discrete probability distribution. Fig. 2–4 shows the probability distribution of the MAC service time at each discrete value. This example uses the RTS/CTS
mechanism. The lengths of RTS/CTS/ACK conform to the IEEE 802.11 MAC protocol. The data
packet length is 1000 bytes and the data transmission rate is 2 Mbps. The values of the
parameters are summarized in Table 2–1.
We notice that the envelope of the probability distribution is similar to an exponential
distribution. If we use a continuous distribution to approximate the discrete one, it
will greatly simplify the analysis of the queueing characteristics. Fig. 2–4 shows the
approximate probability density function (PDF) of TS and several well-known continuous PDFs, including the Gamma distribution, log-normal distribution, exponential distribution
Figure 2–4: Probability distribution of MAC layer service time
Table 2–1: IEEE 802.11 system parameters

Channel bit rate: 2 Mbit/s
PHY header: 192 bits
MAC header: 224 bits
Packet payload size: 1000 bytes
Length of RTS: 160 bits + PHY header
Length of CTS: 112 bits + PHY header
Length of ACK: 112 bits + PHY header
Initial backoff window size (W): 31
Maximum backoff stages (m): 5
Short retry limit: 7
Long retry limit: 4
and Erlang-2 distribution. We observe that the log-normal distribution provides a good
approximation in almost all cases (not only at high collision probability but
also at low collision probability), and also closely matches the tail distribution
of TS. In addition, the exponential distribution seems to provide a reasonably good
approximation except at very low collision probability, where the service time is
more like a deterministic distribution. Here, the PDF of TS is obtained by assuming that
the probability density function is uniform within each small interval and is represented by a
histogram, while the continuous PDFs are determined by the mean and/or variance
of TS. We use 5 ms as the histogram interval because the delay distribution
concentrates around integer multiples of the successful transmission period, which
is approximately 5 ms for 1000-byte packets.
We also notice that pc has different saturation values for different n. If the mobile
station always has packets to transmit, i.e., it is in the saturated state, the idle probability p0
takes its minimum value 0. So, according to formulae (2.15) and (2.18), we can obtain the
saturation value of pc by setting p0 to 0, as shown in Table 2–2.
Table 2–2: Saturation value of collision probability

n       5       9       17      33      65
Max pc  0.1781  0.2727  0.3739  0.4730  0.5692
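These saturation values can be reproduced by solving the fixed point of Equations (2.15) and (2.18) with p0 = 0. In the sketch below, W = CWmin + 1 = 32 and m = 5 follow Table 2–1; the retry limit is taken as effectively unbounded (a large α), an assumption of this sketch that matches the tabulated values:

```python
# Saturation collision probability: fixed point of Eq. (2.15) with p0 = 0
# and tau from the alpha > m branch of Eq. (2.18). W = CWmin + 1 = 32 and
# m = 5 follow Table 2-1; taking alpha large (no effective retry limit)
# is an assumption of this sketch.
W, m, alpha = 32, 5, 1000

def tau(pc):
    num = 2 * (1 - pc**(alpha + 1))
    den = (1 - pc**(alpha + 1)
           + pc * W * sum((2 * pc)**i for i in range(m))
           + W * (1 - 2**m * pc**(alpha + 1)))
    return num / den

def saturation_pc(n, iters=500):
    pc = 0.5
    for _ in range(iters):
        # damped fixed-point iteration of pc = 1 - (1 - tau(pc))^(n-1)
        pc = 0.5 * pc + 0.5 * (1 - (1 - tau(pc))**(n - 1))
    return pc

sat = {n: saturation_pc(n) for n in (5, 9, 17, 33, 65)}
```

Damping is used because the undamped map alternates around the fixed point; the converged values agree with Table 2–2 to the printed precision.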
Figure 2–5: PDF of service time
Fig. 2–5 shows the distribution of TS for different numbers of mobile stations; it
mainly depends on pc and hardly depends on n. Fig. 2–6 shows the mean value of TS at
different collision probabilities. The maximum of TS for each n, which is reached when
pc takes its saturation value, is marked. We observe that the distribution of TS mainly
depends on pc and is determined by the number of active stations at saturation
when pc reaches the saturation value. We discuss how to obtain the value of pc at
different traffic loads in the following section.
Figure 2–6: Mean of service time
2.3.8 Derivation of Transmission Probability
This section derives the transmission probability τ, i.e., the probability that the station transmits a packet in a randomly chosen slot time given that it has packets to
transmit. We follow notations similar to those in [15]. {s(t), b(t)} and Wi have been
defined in Section 2.3.6. Let P{i1, k1|i0, k0} be the short notation for the one-step transition
probability P{i1, k1|i0, k0} = Pr{s(t + 1) = i1, b(t + 1) = k1 | s(t) = i0, b(t) = k0}. Then the only non-null one-step transition probabilities are

P{i, k|i, k + 1} = 1,    k ∈ [0, Wi − 2], i ∈ [0, α]
P{0, k|i, 0} = (1 − pc)/W0,    k ∈ [0, W0 − 1], i ∈ [0, α − 1]
P{i, k|i − 1, 0} = pc/Wi,    k ∈ [0, Wi − 1], i ∈ [1, α]
P{0, k|α, 0} = 1/W0,    k ∈ [0, W0 − 1]    (2.22)
These equations account for the following facts, respectively: the backoff timer is decremented; the backoff timer starts from stage 0 after a successful transmission; the backoff timer starts from a
new stage after an unsuccessful transmission; and the contention window size is reset and the
backoff timer starts from stage 0 when the maximum transmission failure limit is reached.
Let bi,k = lim_{t→∞} Pr{s(t) = i, b(t) = k}, 0 ≤ i ≤ α, 0 ≤ k < Wi, be the stationary
distribution of the Markov chain. First, note that

bi−1,0 · pc = bi,0  →  bi,0 = pc^i · b0,0,    0 < i ≤ α    (2.23)

and

bi,k = [(Wi − k)/Wi] × { bα,0 + (1 − pc) Σ_{j=0}^{α−1} bj,0,    i = 0
                       { pc · bi−1,0,    0 < i ≤ α    (2.24)

By means of equation (2.23), equation (2.24) can be simplified as

bi,k = [(Wi − k)/Wi] bi,0    (0 ≤ i ≤ α, 0 ≤ k ≤ Wi − 1)    (2.25)
Thus, b0,0 can be finally determined by imposing the normalization condition, which
simplifies as follows:

1 = Σ_{i=0}^{α} Σ_{k=0}^{Wi−1} bi,k
  = Σ_{i=0}^{α} bi,0 Σ_{k=0}^{Wi−1} (Wi − k)/Wi
  = Σ_{i=0}^{α} bi,0 (Wi + 1)/2
  = (b0,0/2) Σ_{i=0}^{α} pc^i (Wi + 1)
  = (b0,0/2) Σ_{i=0}^{α} pc^i (2^i·W + 1),    α ≤ m
  = (b0,0/2) [Σ_{i=0}^{m−1} pc^i (2^i·W + 1) + Σ_{i=m}^{α} pc^i (2^m·W + 1)],    α > m    (2.26)
As any transmission occurs when the backoff timer reaches zero, regardless of
the backoff stage, the probability τ that a station which has packets to transmit transmits
in a randomly chosen slot time is

τ = Σ_{i=0}^{α} bi,0 = [(1 − pc^(α+1))/(1 − pc)] b0,0    (2.27)
which can be simplified as

τ = 2(1 − pc^(α+1)) / [1 − pc^(α+1) + (1 − pc)W Σ_{i=0}^{α} (2pc)^i],    α ≤ m

τ = 2(1 − pc^(α+1)) / [1 − pc^(α+1) + pc·W Σ_{i=0}^{m−1} (2pc)^i + W(1 − 2^m·pc^(α+1))],    α > m    (2.28)
2.4 Queueing Modeling and Analysis
2.4.1 Problem Formulation
Many applications are sensitive to end-to-end delay and queue characteristics such as
average queue length, waiting time, queue blocking probability, service time, and goodput.
Thus, it is necessary to investigate the queueing modeling and analysis for wireless LANs
to obtain such performance metrics.
A queueing model is characterized by the arrival process and the service time distribution under a certain service discipline. We characterized the MAC layer service time
distribution in the previous section. In this chapter, we assume that packet arrivals at
each mobile station follow a Poisson process or a deterministic distribution with average
arrival rate λ. The packet transmission process at each station can be modeled as a general
single "server". The buffer size at each station is K. Thus, each
station can be modeled as an M/G/1/K queue when Poisson packet arrivals are assumed.
2.4.2 The Steady-State Probability of the M/G/1/K Queue
Let pn represent the steady-state probability of n packets in the queueing system, let
πn represent the probability of n packets in the queueing system upon a departure at
steady state, and let P = {pij} represent the queue transition probability matrix:

pij = Pr{Xn+1 = j|Xn = i}    (2.29)

where Xn denotes the number of packets seen upon the nth departure.

To obtain pij, we define

kn = Pr{n arrivals during service time TS} = Σ_{i=0}^{∞} [e^(−λi)·(λi)^n / n!] Pr{TS = i}

where λ is the average arrival rate. We can easily obtain
P = {pij} =

| k0   k1   k2   · · ·   kK−2   1 − Σ_{n=0}^{K−2} kn |
| k0   k1   k2   · · ·   kK−2   1 − Σ_{n=0}^{K−2} kn |
| 0    k0   k1   · · ·   kK−3   1 − Σ_{n=0}^{K−3} kn |
| ...                                                 |
| 0    0    0    · · ·   k0     1 − k0                |    (2.30)
Moreover, we notice that

k0 = B(e^(−λ)),    kn = [λ^n / ((−1)^n·n!)] · ∂^n B(e^(−λ)) / ∂λ^n    (2.31)
where B(e^(−λ)) is obtained by replacing Z with e^(−λ) in equation (2.19), i.e., in the PGF of the
MAC layer service time TS.
According to the balance equation

πP = π    (2.32)

where π = {πn}, and the normalization equation, we can compute π. For the finite
system size K with Poisson input, we have [53]

p0 = π0/(π0 + ρ),    pn = πn/(π0 + ρ)  (0 ≤ n ≤ K − 1),    pK = 1 − 1/(π0 + ρ)    (2.33)

where ρ is the traffic intensity, ρ = λE[TS].
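The chain (2.29)–(2.33) can be assembled numerically. The sketch below uses a hypothetical two-point service-time pmf and arrival rate: it computes kn, builds the transition matrix of Equation (2.30), obtains π by power iteration on Equation (2.32), and recovers p0 and the blocking probability pK from Equation (2.33):

```python
import math

# k_n (arrivals during one service time), the embedded transition matrix P
# of Eq. (2.30), its stationary vector pi, and p0/pK from Eq. (2.33).
# The two-point service-time pmf and the arrival rate are hypothetical.
lam = 0.005                              # Poisson arrival rate (packets/slot)
service_pmf = {40: 0.7, 100: 0.3}        # Pr{Ts = i}, service time in slots
K = 5                                    # system size

def k(n):
    return sum(math.exp(-lam * i) * (lam * i)**n / math.factorial(n) * p
               for i, p in service_pmf.items())

P = []
for i in range(K):                       # states: packets left behind, 0..K-1
    lo = max(i - 1, 0)                   # a departure leaves at least i-1
    row = [k(j - lo) if j >= lo else 0.0 for j in range(K - 1)]
    row.append(1.0 - sum(row))           # last column absorbs the tail sum
    P.append(row)

pi = [1.0 / K] * K                       # stationary distribution, Eq. (2.32)
for _ in range(5000):
    pi = [sum(pi[i] * P[i][j] for i in range(K)) for j in range(K)]

rho = lam * sum(i * p for i, p in service_pmf.items())   # traffic intensity
p0 = pi[0] / (pi[0] + rho)               # Eq. (2.33)
pK = 1.0 - 1.0 / (pi[0] + rho)           # blocking probability
```

Each row of P sums to one by construction, and at this light load the blocking probability comes out small but strictly positive, as expected for a finite buffer.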
If we can approximate the distribution of the MAC service time by an exponential distribution, the steady-state probabilities for the M/M/1/K model [53] are given by:

p0 = [Σ_{i=0}^{K} ρ^i]^(−1),    pi = ρ^i·p0    (0 ≤ i ≤ K)    (2.34)
2.4.3 Conditional Collision Probability pc and Distribution of MAC Layer Service Time
From the above derivation, we know that p0 is a function of pc, λ, and n. So we can
compute pc for different values of λ and n with the help of (2.15) and (2.18) using a
recursive algorithm. Thus, we can obtain the distribution of the MAC service time at different
offered loads according to the results obtained in Section 2.3.7. Here we assume that the
packet arrival processes at the stations are independent and identically distributed, and hence
we can obtain the aggregate performance of the wireless LAN from the queueing analysis in
this section.
2.4.4 Performance Metrics of the Queueing System
The average queue length, blocking probability, and average waiting time (including the
MAC service time) are given by

L = Σ_{i=0}^{K} i × pi,    pB = pK = 1 − 1/(π0 + ρ),    W = L / [λ(1 − pB)]    (2.35)
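For the exponential approximation, Equations (2.34) and (2.35) reduce to a few lines. The sketch below (ρ, K and λ are hypothetical, with E[TS] normalized to one time unit so that ρ = λ) computes the M/M/1/K steady-state probabilities and the resulting queue metrics:

```python
# M/M/1/K steady state (Eq. 2.34) and the metrics of Eq. (2.35);
# rho, K and lam are hypothetical, with E[Ts] = 1 time unit so rho = lam.
rho, K, lam = 0.8, 50, 0.8
p = [rho**i for i in range(K + 1)]
norm = sum(p)
p = [x / norm for x in p]                  # Eq. (2.34)

L = sum(i * pi for i, pi in enumerate(p))  # average queue length
pB = p[K]                                  # blocking probability, pK
Wq = L / (lam * (1 - pB))                  # average waiting time, Eq. (2.35)
```

With ρ < 1 and a buffer of 50 the blocking probability is tiny and L approaches the infinite-buffer value ρ/(1 − ρ).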
2.4.5 Throughput
If we know the blocking probability pB, then the throughput S at each station can be
computed easily by

S = λ(1 − pB)(1 − pc^(α+1))    (2.36)

where pc^(α+1) is the packet discard probability due to transmission failures.
2.4.6 Numerical Results
Fig. 2–7 shows the results for the major performance metrics. All of them change
dramatically around a traffic load of 1.1–1.5 Mbits/sec. This is because
collisions increase significantly around this traffic load, resulting in much longer MAC
service times for each packet.
Figure 2–7: Queue characteristics
From the results, we observe that all the metrics depend on the collision probability pc. Fig. 2–7 shows that pc mainly depends on the total traffic in the non-saturated
scenario. More precisely, pc is affected by the total number of packets that all
neighboring stations attempt to transmit. In the non-saturated case, when all arriving packets
are immediately served by the MAC layer, the queue length may reach zero and the corresponding station does not compete for the medium. However, in the saturated scenario,
i.e., when the stations always have packets to transmit, the total number of packets attempting to
transmit equals the total number of neighboring stations, and hence pc mainly depends
on the total number of neighboring stations, as we expect.
The MAC layer service time shows a similar change at different offered loads, because
it depends on pc. All the other performance metrics depend on the distribution
of the MAC layer service time, so they show similar changes in the figures. The average
queue length is almost zero in the non-saturated state and reaches almost the maximum length
in the saturated state. The average waiting time of each packet in the queue is almost
zero in the non-saturated state and reaches several seconds in the saturated state. The
queue blocking probability is zero in the non-saturated state when the traffic load is low,
and increases linearly with the offered load in the saturated state. The throughput increases
linearly with the offered load in the non-saturated state and maintains a constant value,
for a given total number of transmitting stations, in the saturated state. The saturation
throughput decreases as the number of stations increases because the collision probability
climbs with the number of stations. This is consistent with the saturation
throughput results of Bianchi [15], who indicates that the saturated throughput
decreases as n increases under a small initial backoff window size for a given
set of system parameters. In addition, the packet discard probability at the MAC layer is
much smaller than the queue blocking probability.
In summary, all these results indicate that the IEEE 802.11 MAC works well in the non-saturated state at low traffic load, while its performance degrades dramatically in the saturated state, especially for the delay metric. Moreover, in the non-saturated state the performance depends on the total traffic and is insensitive to the number of transmitting
stations; in the saturated state, the number of transmitting stations is much more important
to the overall performance. Similar phenomena were observed for the distribution
of the MAC service time shown in Section 2.3.7.
2.5 Performance Evaluation
2.5.1 Simulation Environments
In our simulation study, we use the ns-2 package [41]. The wireless channel capacity is
set to 2 Mbps. The data packet length is 1000 bytes, and the maximum queue length is 50. The
radio propagation model is the two-ray ground model. We use different numbers of mobile
stations in a rectangular grid with dimensions 150 m x 150 m to simulate the wireless LAN.
All stations have the same packet input rate. The MAC protocol uses the RTS/CTS-based 802.11 MAC, and the other parameters are summarized in Table 2–1.
2.5.2 Probability Distribution of MAC Layer Service Time
Fig. 2–8 shows the simulation results for the MAC layer service time in a network
with 17 mobile stations and total traffic of 0.2, 0.8 and 1.6 Mbps, respectively. It displays
a good match between the probability density functions from the analysis and from the
simulation. Notice that, as in Fig. 2–4, the PDFs shown in Fig. 2–8 are histogram
approximations of the discrete probability distributions obtained from both analysis and
simulation.
Figure 2–8: MAC layer packet service time
Our results indicate that the distribution of the MAC layer service time is independent of the
packet input distribution, whether deterministic or Poisson. It mainly depends on the total traffic in the network before saturation and on the number of mobile
stations after saturation, which is consistent with the analysis.
2.5.3 Comparison of M/G/1/K and M/M/1/K Approximations with Simulation Results
The exponential distribution is memoryless. If we can model the MAC layer
service time with this distribution, it becomes very convenient to predict system
performance metrics such as throughput, link delay, and packet discard ratio. The question is how
good this approximation is for our modeling.
As noted in Section 2.3.7, the exponential distribution seems to be a good approximation for the MAC layer service time. In Fig. 2–9 and 2–10, we compare it with the derived
discrete probability distribution in the queueing analysis to assess how well it predicts
the MAC throughput, packet waiting time, queue blocking probability and average queue
length. Here, we assume that the system has Poisson arrivals. We use two queueing models
for these two distributions: M/M/1/K and M/G/1/K. Fig. 2–9 and 2–10 show the results for
a WLAN with 9 mobile stations.
Figure 2–9: Comparisons between M/G/1/K, M/M/1/K models and simulation
Figure 2–10: Average waiting time in non-saturated status
From Fig. 2–9 and 2–10, we observe that the M/M/1/K model gives a close approximation
to the M/G/1/K model for some performance metrics. Both models yield almost the same
throughput and queue blocking probability. However, when the mobile stations are in the
saturated state, M/M/1/K gives a large prediction error for the average queue length and
average waiting time; elsewhere the difference is small except at the turning point between the
non-saturated and saturated states, where the system performance changes dramatically. The M/G/1/K model always provides the better approximation for all performance
metrics.
We also compare the results of the queueing models with the simulation in Fig. 2–9
and 2–10. The two queueing models closely approximate the simulation
results for all performance metrics when the mobile stations are in the non-saturated state.
However, there are distinct differences between them when the system is in the saturated
state. This is because the Markov chain model overestimates the average MAC layer
service time by about 10% in the saturated state compared to the simulation results from
ns-2, as shown in Fig. 2–11. The reason may be that the Markov chain model does not
capture all the protocol details and/or the implementation considerations of the IEEE 802.11
MAC protocol in ns-2. Thus, the simulation results have higher throughput, lower queue
blocking probability, smaller average queue length and smaller average waiting time in the
saturated state.
Figure 2–11: Average MAC layer service time
With extensive simulations for different numbers of mobile stations in randomly generated wireless LANs, we have concluded that the Markov chain models appear to always
give an upper bound on the average MAC layer service time. Thus, the queueing models
using the distribution of the service time give a lower bound on the throughput, and upper
bounds on the queue blocking probability, average queue length and average waiting time,
compared with ns-2 simulations. Therefore, our analytical models are always useful
for producing performance estimates for design purposes.
2.6 Conclusions
In this chapter, we have derived the probability distribution of the MAC layer service
time. To obtain this distribution, we use the signal transfer function of the generalized state
transition diagram and extend the Markov chain model to the more general case of the exponential backoff procedure in the IEEE 802.11 MAC protocol. Both the exact discrete probability
distribution and approximate continuous probability distributions are obtained in this chapter. Based on the distribution of the MAC layer service time, we develop a queueing
model and evaluate the performance of the IEEE 802.11 MAC protocol in wireless LANs
in terms of throughput, delay, and other queueing performance metrics. Our results show that
in the non-saturated state, the performance depends on the total traffic and is insensitive
to the number of transmitting stations, while in the saturated state, the number of transmitting
stations affects the performance more significantly.
In addition, the analytical results indicate that the exponential distribution may provide a
good approximation for the MAC layer service time in the queueing analysis. The queueing
models discussed in this chapter can accurately estimate various performance metrics of a
WLAN in the non-saturated state, which is the desired state for applications with
QoS requirements because there is no excessive queueing delay as in the saturated
state. For WLANs in the saturated state, the queueing models give a lower bound on
the throughput, and upper bounds on the queue blocking probability, average queue length
and average waiting time, compared with simulation results obtained from ns-2.
CHAPTER 3
HOW WELL CAN THE IEEE 802.11 DCF PROTOCOL SUPPORT QOS IN WIRELESS LANS
This chapter studies an important problem in IEEE 802.11 DCF-based wireless
LANs: how well the network can support quality of service (QoS). Specifically, we analyze
the network's performance in terms of maximum protocol capacity or throughput, delay,
and packet loss rate. Although the performance of the 802.11 protocol, such as throughput or delay, has been extensively studied in the saturated case, we demonstrate that the
maximum protocol capacity can only be achieved in the non-saturated case, and is almost
independent of the number of active nodes. By analyzing the packet delay, consisting of
the MAC service time and waiting time, we derive accurate estimates for the delay and delay
variation as the throughput increases from zero to its maximum value. The packet loss rate
is also given for the non-saturated case. Furthermore, we show that the channel busyness
ratio provides precise and robust information about the current network status, which can
be utilized to facilitate QoS provisioning. We have conducted a comprehensive simulation
study to verify our analytical results and to tune the 802.11 to work at the optimal point
with the maximum throughput and low delay and packet loss rate. The simulation results
show that by controlling the total traffic rate, the original 802.11 protocol can support strict
QoS requirements, such as those required by voice over IP or streaming video, and at the
same time, achieve a high channel utilization.
3.1 Introduction
Because of its simple deployment and low cost, the IEEE 802.11 wireless LAN [68]
has been widely used in recent years. It contains two access methods, i.e., Distributed
Coordination Function (DCF) and Point Coordination Function (PCF), with the former
being specified as the fundamental access method. Despite its popular use, currently only
Table 3–1: QoS requirements for multimedia services

Class        Application                           One-way transmission delay            Delay variation  Packet loss rate
Real-time    VoIP, video conferencing              <150 ms (preferred), <400 ms (limit)  1 ms*            1% (video), 3% (audio)
Streaming    Streaming audio and video             up to 10 s                            1 ms*            1%
Best effort  E-mail, file transfer, web browsing   minutes or hours                      N/A              Zero

* A playout buffer (or jitter buffer) can be used to compensate for delay variation
best effort traffic is supported in DCF. Section 3.2 describes the 802.11 protocol in more
detail.
For the IEEE 802.11 wireless LAN to continue to thrive and evolve as a viable wireless
access to the Internet, quality of service (QoS) provisioning for multimedia services is
crucial. As shown in Table 3–1, for real-time, streaming, and non-real-time (or best effort)
traffic, the major QoS metrics include bandwidth, delay, delay jitter, and packet loss rate
[73, 74]. Guaranteeing QoS for multimedia traffic, however, is not an easy task, given that
the 802.11 DCF is contention-based and distributed by nature, which renders effective
and efficient control very difficult. In addition, other problems such as hidden terminals
and channel fading make things worse. To address these challenges, current research efforts
([1, 125, 161, 153] and references therein) and the enhanced DCF (EDCF) defined in the
IEEE 802.11e draft [72, 31] tend to provide differentiated service rather than stringent QoS
assurance.
However, the question of how well the IEEE 802.11 WLAN can support QoS is not yet
well understood, even as many researchers have begun to believe that service differentiation
is the best the 802.11 can achieve. In this chapter, we endeavor to address this problem
through both theoretical analysis (Section 3.3) and simulations (Section 3.4).
We develop an analytical model to assess the capability of the 802.11 to support the
major QoS metrics, i.e., throughput, delay and delay variation, and packet loss rate. While
the current literature on performance analysis focuses on the derivation of throughput or
delay in the saturated case, we find that the optimal operating point for the 802.11 to work
at lies in the non-saturated case.¹ At this point, we analytically show that the maximum
throughput is achieved almost independently of the number of active nodes, and that the delay
and delay variation are low enough to satisfy the stringent QoS requirements of real-time
traffic. Thus the 802.11 WLAN can perform very well in supporting QoS, as long as it is
tuned to the optimal point. Since an accurate indicator of the network status is essential
to effective tuning, we also demonstrate that the channel busyness ratio, which is easy to
obtain and represents the network utilization accurately and in a timely manner, can be used to design
schemes such as call admission control or rate control in the WLAN. We will present such
schemes in a subsequent chapter.
In Section 3.5, we show that our analytical results remain valid even when the effect
of channel fading is taken into account. We also discuss the possible implications of
employing a prioritized 802.11 DCF. Finally, Section 3.6 concludes this
chapter.
3.2 Preliminaries
3.2.1 Operations of the IEEE 802.11
The basic access method in the IEEE 802.11 MAC protocol is DCF (Distributed Coordination Function), which is based on carrier sense multiple access with collision avoidance
(CSMA/CA). Before starting a transmission, each node performs a backoff procedure, with
the backoff timer uniformly chosen from [0, CW-1] in terms of time slots, where CW is
the current contention window. When the backoff timer reaches zero, the node transmits a
DATA packet. If the receiver successfully receives the packet, it acknowledges the packet
by sending an acknowledgment (ACK). If no acknowledgment is received within a speci-
fied period, the packet is considered lost; so the transmitter will double the size of CW and
choose a new backoff timer, and start the above process again. When the transmission of a
¹ Note that a similar fact has been observed for Aloha and Slotted Aloha, where the maximum throughput is achieved only when traffic arrives at a certain rate [11].
packet fails for a maximum number of times, the packet is dropped. To avoid collisions of
long packets, the short RTS/CTS (request to send/clear to send) frames can be employed.
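The backoff procedure described above can be sketched as a toy simulation. The model below is a simplification, not the full protocol: the per-attempt collision probability p is fixed, the slot-by-slot countdown is elided, and the CWmin, CWmax and retry-limit values are illustrative:

```python
import random

# Toy sketch of the DCF binary exponential backoff described above.
# Assumptions: fixed per-attempt collision probability p, no slot-by-slot
# countdown, and illustrative CWmin/CWmax/retry-limit values.
CWMIN, CWMAX, RETRY_LIMIT = 32, 1024, 7

def transmit_packet(p, rng):
    """Return the number of attempts used, or None if the packet is dropped."""
    cw = CWMIN
    for attempt in range(1, RETRY_LIMIT + 1):
        backoff = rng.randrange(cw)   # backoff timer drawn from [0, CW-1]
        # (the countdown of `backoff` idle slots is elided in this sketch)
        if rng.random() > p:          # ACK received: success
            return attempt
        cw = min(2 * cw, CWMAX)       # no ACK: double CW and retry
    return None                       # retry limit reached: packet dropped

attempts = [transmit_packet(0.3, random.Random(seed)) for seed in range(1000)]
```

With p = 0.3, most packets succeed within one or two attempts, and drops (probability p^7) are rare, illustrating why the retry limit matters mainly under heavy contention.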
Note that the IEEE 802.11 MAC also incorporates an optional access method called
PCF (Point Coordination Function), which is only usable in infrastructure network configurations and is not supported by most current wireless cards. In addition, it may result in
poor performance, as shown in previous research [126, 145]. In this chapter, we thus focus
on the 802.11 DCF.
3.2.2 Related Work
To date, two threads of research have examined the properties and performance of the
IEEE 802.11: performance analysis, and performance and/or QoS enhancements.
Performance Analysis:The first thread was devoted to building analytical models to
characterize the behavior of the 802.11, and deriving the protocol capacity or delay perfor-
mance [15, 21, 44, 58, 66, 134, 154, 160]. Bianchi [15] proposed a Markov chain model
for the binary exponential backoff procedure. By assuming the collision probability of
each node’s transmission is constant and independent of the number of retransmissions, he
derived the saturated throughput for the IEEE 802.11 DCF. Based on the saturated through-
put derived in Bianchi’s model, Foh and Zuckerman [44] used a Markovian state dependent
single server queue to analyze the throughput and mean packet delay. Cali et al. [21] stud-
ied the 802.11 protocol capacity by using a p-persistent backoff strategy to approximate
the original backoff in the protocol. Again, the focus is on the saturated throughput. In ad-
dition to collisions, Hadzi-Velkov and Spasenovski took the effect of frame error rate into
account in their analysis of saturated throughput and delay [58]. We derived an approxi-
mate probability distribution of the service time, and based on the distribution, analyzed
the throughput and average delay [154, 160]. As can be seen, most of these works focused on the analysis of throughput and delay in the saturated case. Moreover, none of them systematically considered the delay and delay variation in the non-saturated case, let alone obtained accurate estimates for them.
Performance and/or QoS Enhancements: The second thread of the research on the 802.11 DCF explored various ways to improve throughput [13, 20, 85, 90] or provide
prioritized service, namely, service differentiation [1, 81, 114, 119, 125, 137].
Based on the work [21], Cali et al. attempted to approach the protocol capacity by
replacing the exponential backoff mechanism with an adaptive one [20]. Kim and Hou
developed a model-based frame scheduling algorithm to improve the protocol capacity of
the 802.11 [85]. Two fast collision resolution schemes were proposed by Bharghavan [13]
and Kwon et al. [90], respectively. The idea is to use two channels or to adjust backoff
algorithms to avoid collisions, thereby providing higher channel utilization.
To provide service differentiation, Aad and Castelluccia [1] proposed to scale the contention window and use different inter-frame spacings or maximum frame lengths for services of different priorities. Two mechanisms [125], i.e., virtual MAC and virtual source,
were proposed to enable each node to provide differentiated services for voice, video, and
data. By splitting the transmission period into a real-time period and a non-real-time one, the real-time traffic is supported with a QoS guarantee [114]; however, the DCF mode was dramatically changed. Black Burst [119] provides high priority for the real-time traffic. Unfortunately, it imposes special requirements on high-priority traffic and is not fully compatible with the existing 802.11 standard. In summary, if the semantics of the 802.11 DCF is maintained, all the works mentioned above can only support service differentiation.
Our studies can be considered a convergence of these two threads of research; however, they improve on both sides. We thoroughly study the QoS performance of the 802.11 in terms of throughput, delay and delay variation, and packet loss rate.
Moreover, we discover the optimal operating point at which, in addition to achieving the
theoretical maximum throughput, the 802.11 WLAN is capable of supporting strict QoS
requirements for the real-time traffic, rather than only providing prioritized service.
3.3 Analytical Study of the IEEE 802.11
This section focuses on the analysis of the performance of the IEEE 802.11 DCF. Note
that in the following analysis, the hidden terminal problem is ignored. This is because in
a typical wireless LAN environment, every node can sense all the others’ transmissions,
although it may not necessarily be able to correctly receive the packets from all other
nodes.
3.3.1 Maximum Throughput and Available Bandwidth
To simplify the analysis and yet reveal the characteristics of the IEEE 802.11 MAC
protocol, we assume that the traffic is uniformly distributed among the nodes. The total number of nodes is n. The transmission probability for each node in any time slot is pt.
Note that here a time slot at the MAC layer could be an empty backoff time slot, a period
associated with a successful transmission, or a period associated with a collision [15, 68].
Obviously, we obtain the following equations:
pi = (1 − pt)^n
ps = n·pt·(1 − pt)^(n−1)
pc = 1 − pi − ps    (3.1)
where pi is the probability that the observed backoff time slot is idle, ps is the probability that there is one successful transmission, and pc is the collision probability that there are at least two concurrent transmissions in the same backoff time slot. If we define Tsuc as the average time period associated with one successful transmission, and Tcol as the average time period associated with collisions, we know [68]

Tsuc = rts + cts + data + ack + 3·sifs + difs
Tcol = rts + sifs + cts + difs = rts + eifs,    (3.2)
for the case where the RTS/CTS mechanism is used, and
Tsuc = data + ack + sifs + difs
Tcol = data* + ack timeout + difs,    (3.3)
Figure 3–1: Channel busyness ratio and utilization (RTS/CTS scheme with n = 5, 10, and 300; channel busyness ratio, channel utilization, and normalized throughput plotted against the collision probability p on a logarithmic axis from 10^−4 to 10^0)
for the case where there is no RTS/CTS mechanism, where data and data* (please refer to [15] for the derivation of data*) are the average lengths, in seconds, of the successful transmission and the collision of the data packets, respectively. Thus, it can easily be obtained that

Ri = pi·σ / (pi·σ + ps·Tsuc + pc·Tcol)
Rb = 1 − Ri
Rs = ps·Tsuc / (pi·σ + ps·Tsuc + pc·Tcol),    (3.4)
where σ is the length of an empty backoff time slot, Ri is the channel idleness ratio, Rb is the channel busyness ratio, and Rs is the channel utilization. Once we obtain Rs, the normalized throughput s is expressed as

s = Rs × data/Tsuc,    (3.5)

and the absolute throughput is s times the bit rate for data packets.

In most cases, we are more interested in the packet collision probability p observed at each individual node, since it can be used to calculate QoS metrics for the traffic traversing the node. In other words, p is the probability that one node encounters collisions when it transmits. Also, p is the probability that there is at least one transmission among the
Table 3–2: IEEE 802.11 system parameters

Bit rate for DATA packets: 2 Mbps
Bit rate for RTS/CTS/ACK: 1 Mbps
PLCP data rate: 1 Mbps
Backoff slot time: 20 µs
SIFS: 10 µs
DIFS: 50 µs
PHY header: 192 bits
MAC header: 224 bits
DATA packet: 8000 bits + PHY header + MAC header
RTS: 160 bits + PHY header
CTS, ACK: 112 bits + PHY header
neighbors in the observed backoff time slot. We thus link p to pt as

p = 1 − (1 − pt)^(n−1).    (3.6)

It can be seen that the collision probability increases with the number of neighboring nodes or with the traffic at each of these nodes. In this sense, p reflects information about both the number of neighboring nodes and the traffic distribution at these nodes.
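Equations (3.1) and (3.6) are straightforward to evaluate numerically; a small sketch (the function names are ours):

```python
def slot_probabilities(n, pt):
    """Eq. (3.1): per-slot probabilities of idle, success, and collision."""
    pi = (1 - pt) ** n
    ps = n * pt * (1 - pt) ** (n - 1)
    return pi, ps, 1 - pi - ps

def collision_probability(n, pt):
    """Eq. (3.6): probability that a tagged node's transmission collides,
    i.e., at least one of its n-1 neighbors transmits in the same slot."""
    return 1 - (1 - pt) ** (n - 1)

# p grows with both the number of neighbors and their transmission probability.
pi, ps, pc = slot_probabilities(10, 0.02)
p = collision_probability(10, 0.02)
```

The three slot probabilities always sum to one, and p increases monotonically in both n and pt, matching the observation above.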
According to the above equations, we can express Rb, Rs, and s as functions of p, as shown in Fig. 3–1. All the parameters involved are listed in Table 3–2, and most are the default values in the IEEE 802.11. In Fig. 3–1, three cases, i.e., n = 5, 10, and 300, are considered. It is important to note that for each specific n, there exists a maximum value of p, denoted by MAX(p), at which the network operates in the saturated status, i.e., each of the n nodes always has packets in the queue and thus keeps contending for the channel. Based on the works [15, 154, 160], we know that in the saturated status, the larger the number of nodes, the greater the collision probability. More precisely, MAX(p) = 0.105, 0.178, 0.290, 0.546, 0.701, 0.848 for n = 3, 5, 10, 50, 128, 300, respectively. Next we present some important observations from Fig. 3–1.
Channel busyness ratio: an accurate indicator of network utilization

First, we find that the channel busyness ratio is an injective function of the collision probability; in fact, this can easily be proved. Second, when p ≤ 0.1, Rb is almost the same as Rs, namely

Rs ≈ Rb.    (3.7)

This is not hard to understand: when the collision probability p is very small, the channel resource wasted in collisions is so minor that it can be ignored. Third, the normalized throughput stays almost unchanged when p increases from 0.1 to 0.2, although it reaches its maximum value around p = 0.2. Finally, the maximum throughput is almost insensitive to the number of active nodes. Given these observations and the fact that the throughput is proportional to Rs, we can therefore use the measured channel busyness ratio Rb to accurately estimate the throughput from zero up to the maximum value. Note that this is very simple and useful for each node: it can monitor the throughput of the whole WLAN by simply measuring the channel busyness ratio, which is easy to do since the IEEE 802.11 is a CSMA-based MAC protocol relying on the physical and virtual carrier sensing mechanisms. On the other hand, when Rb exceeds a certain threshold thb, severe collisions can be observed in the WLAN.
Maximum throughput
Fig. 3–1 also shows that the throughput begins to decrease when p is greater than a certain value, and could decrease to zero when p becomes very large. To ensure that the network always operates at high throughput, it is important to find the critical turning point, i.e., where the IEEE 802.11 achieves the maximum throughput, and how the maximum throughput depends on network characteristics such as the number of nodes n and the traffic.
Combining Equations (3.1), (3.4), and (3.6), we can write Rs as a function of p. To obtain the maximum throughput, we take the derivative of Rs with respect to p and set it equal to 0:

dRs/dp = 0.    (3.8)

Meanwhile, we know that p is upper bounded by MAX(p). Therefore, if proot is the root of Equation (3.8), we obtain the value of p, denoted by p*, at which the maximum throughput is achieved:

p* = MIN(proot, MAX(p)).    (3.9)
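With the Table 3–2 parameters, p* can be located numerically. The sketch below uses a grid search over p in place of the root of Equation (3.8); the timing constants are assembled from Table 3–2 under our reading of Eq. (3.2), so the exact numbers are illustrative.

```python
# Timings in microseconds, assembled from Table 3-2 (RTS/CTS access).
SLOT, SIFS, DIFS = 20.0, 10.0, 50.0
PHY = 192.0                          # PLCP preamble/header at 1 Mbps
RTS, CTS, ACK = 160.0 + PHY, 112.0 + PHY, 112.0 + PHY
DATA = (8000.0 + 224.0) / 2.0 + PHY  # payload + MAC header at 2 Mbps
T_SUC = RTS + CTS + DATA + ACK + 3 * SIFS + DIFS   # Eq. (3.2)
T_COL = RTS + SIFS + CTS + DIFS                    # = rts + eifs

def channel_utilization(p, n):
    """Rs as a function of the collision probability p (Eqs. 3.1, 3.4, 3.6)."""
    pt = 1 - (1 - p) ** (1.0 / (n - 1))   # invert Eq. (3.6)
    pi = (1 - pt) ** n
    ps = n * pt * (1 - pt) ** (n - 1)
    pc = 1 - pi - ps
    return ps * T_SUC / (pi * SLOT + ps * T_SUC + pc * T_COL)

def p_root(n, grid=5000):
    """Grid-search stand-in for solving dRs/dp = 0, Eq. (3.8)."""
    candidates = (0.9 * (i + 1) / grid for i in range(grid))
    return max(candidates, key=lambda p: channel_utilization(p, n))
```

Since the normalized throughput s = Rs × data/Tsuc differs from Rs only by a constant factor, the same p maximizes both; for moderate to large n the search lands near the p ≈ 0.2 region discussed in the text.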
By applying p* to Equation (3.5), we get the maximum normalized throughput of the IEEE 802.11 for different n, as shown in Figs. 3–2(a) and 3–2(b). Here two important points are noted.
Maximum throughput is achieved in the non-saturated case, rather than in the saturated case, when n > 5. This is the very reason that we argue the network should work in the non-saturated case. When n > 5, the normalized throughput reaches its maximum value around p = 0.196, much smaller than the collision probability in the saturated status, i.e., MAX(p), as clearly seen in Fig. 3–2(a). p = 0.196 means there are 5 or 6 nodes simultaneously contending for the channel, which can be derived from the inverse function of MAX(p) as shown earlier. In addition, the maximum throughput achieved is not sensitive to the number of nodes n; it remains rather stable as n increases.
Maximum throughput can be achieved by controlling the total input traffic rate if no modification to the MAC protocol is allowed. As revealed in Fig. 3–2(b), rather than letting p = p* for each n, if we simply let p ≤ 0.1 or p ≤ 0.05, the achieved normalized throughput only drops by 0.96% and 4.2%, respectively, compared to the maximum normalized throughput. This is a very nice and important feature in the sense that as long as each node in the network can keep the collision probability p below a certain value, say 0.1, instead of p*, which is dependent on n, the maximum throughput is well approached. Thus, by maintaining a small collision probability in the wireless LAN, which can be done
Figure 3–2: Collision probability and maximum normalized throughput with RTS/CTS and payload size of 8000 bits. (a) proot and the collision probability MAX(p) in the saturated status versus the number of nodes n; (b) maximum normalized throughput under different constraints on the collision probability p (p = p*, p ≤ 0.1, p ≤ 0.05, p = MAX(p)).
through controlling the total input traffic rate, we can achieve high throughput. This is in fact consistent with our observation in Fig. 3–1, where Rb ≈ Rs when p ≤ 0.1.

Note that in addition to achieving high throughput, keeping a small collision probability helps reduce delay. Since the time wasted due to collisions can be neglected, the contention delay is very small, which is crucial for providing low delay to real-time traffic and will be discussed in detail in Section 3.3.2.
Available bandwidth
The total available bandwidth BWa of the wireless LAN, i.e., the additional traffic rate the network can accommodate, can easily be obtained by subtracting the current throughput from the maximum throughput.
Although it is not easy for an individual node to know the current total throughput if it does not decode everything it receives, the node can be aware of the available bandwidth by virtue of the channel busyness ratio, which can easily be acquired as described earlier. In particular, when p ≤ 0.1, Rb ≈ Rs. Thus BWa can be calculated as follows:

BWa = BW·(thb − Rb)·data/Tsuc,  (thb > Rb)
BWa = 0,  (thb ≤ Rb)    (3.10)

where BW is the transmission rate in bits/s for the DATA packets, and thb is a threshold on Rb proportional to the maximum throughput.
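As a sketch, Eq. (3.10) in code; the threshold `thb` and the timing defaults are placeholders that a real implementation would calibrate to the maximum-throughput operating point:

```python
def available_bandwidth(rb, bw=2e6, thb=0.95, data=4304e-6, t_suc=5344e-6):
    """Eq. (3.10): residual capacity (bits/s) from the measured busyness ratio Rb.

    bw: DATA bit rate; thb: busyness-ratio threshold; data, t_suc: average
    durations (seconds) of a DATA frame and of a whole successful exchange.
    All default values here are illustrative, not mandated by the text."""
    if thb <= rb:
        return 0.0               # the channel is already at (or past) capacity
    return bw * (thb - rb) * data / t_suc
```

An idle channel (Rb = 0) reports nearly the full effective capacity, and the estimate decreases linearly to zero as Rb approaches the threshold.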
Impact of payload size and the RTS/CTS mechanism
Thus far we have conducted our analysis without considering the impact of payload size and the RTS/CTS scheme on the throughput. In this subsection we study this impact. Fig. 3–3 presents the analytical results, where RTS/CTS is either used or not used, and various payload sizes are considered.

We find that no matter whether RTS/CTS is used, the throughput increases with the payload size. However, this is not necessarily true for channel utilization, as in the case where RTS/CTS is not used in the saturated case. The reason is the following. In the saturated case, given n, p is fixed. According to Equations (3.1), (3.3), (3.4), and (3.6), Rs is almost unchanged and Rs ≈ ps/(ps + pc).
It can also be observed that the maximum throughput is higher when RTS/CTS is not used than when it is used, no matter how large the payload size is. This is because the maximum throughput is obtained when p is relatively small, and thus the impact of collisions of long data packets can be ignored. As a result, if RTS/CTS is not used, the MAC overhead is reduced, which results in higher throughput. On the contrary, in the saturated case, where the collision probability is much higher, the use of RTS/CTS does improve the throughput, especially when the payload size is large. This is because the impact of collisions of long data packets becomes significant in
Figure 3–3: Impact of payload size and the RTS/CTS mechanism (n = 100; normalized throughput and channel utilization versus payload size in bytes, for the maximum-throughput and saturated cases, each with and without RTS/CTS)
the saturated case and cannot be ignored; the exchange of RTS/CTS avoids long packet
collisions and thus reduces the MAC overhead. Note that for payload sizes shorter than about 220 bytes in this parameter setting, the use of RTS/CTS is counterproductive because of its relatively high overhead compared with the short payload.
To sum up, to maximize the system throughput, the basic access method without the RTS/CTS mechanism is preferable, as long as we can keep the collision probability at a relatively small value.
3.3.2 Delay and Delay Variation
In this subsection, we study the delay and delay variation performance, which is an integral part of QoS provisioning in the 802.11 WLAN. As we know, the delay in the network comprises three components: propagation delay, transmission delay, and queueing delay. Note that in the WLAN, the transmission delay contains a variable amount of delay caused by MAC layer collisions and thus is not fixed. Henceforth, we define the sum of the propagation delay and transmission delay as the service time at the MAC layer, which is the time period from the instant that a packet begins to be serviced by the MAC layer to the instant that it is either successfully transmitted or dropped after several failed retransmissions.
In the following, we first give an analysis of the service time and the queueing delay. Then, estimates of the delay and delay variation are derived.
Service time distribution
Markov Chain Model for the Service Time
After examining the transmission procedure introduced in Section 3.2.1, we can conclude that the only outside factor is the collision probability p when the node attempts a transmission. As discussed in the previous section, p is determined by the number of neighboring nodes and the traffic distribution at those nodes. Thus we can assume that p is independent of the backoff state of the node under consideration, although it is still dependent on the backoff states of the other nodes. We can therefore model the stochastic process of the service time as a Markov chain, since the future state only depends on the current state. Clearly, the transition probabilities depend on the collision probability p; thus the service time distribution is a function of p.
Probability Generating Function of the Service Time
The service time for each packet consists of multiple backoff time slots, which can be empty slots, collision slots, or successful transmission slots. As mentioned earlier, since the length of an empty backoff slot is a fixed value and Tsuc or Tcol depends on the lengths of the header and data packet, which are discrete in bits, it is suitable to model the service time distribution as a discrete probability distribution. To facilitate analysis, this distribution is completely described by its probability generating function (PGF).

By applying the signal transfer function to the generalized state transfer diagram of the Markov chain, we have derived the PGF of the service time, GTs(Z), which is quite accurate as verified by ns-2 simulations [154, 160]. On the other hand,

GTs(Z) = Σ_{i=0}^{∞} pi·Z^{tsi},    (3.11)

where tsi (i ≥ 0) are all possible discrete values of the service time Ts and pi = Pr{Ts = tsi}. We also found that given p, the service time distribution is almost insensitive to
Figure 3–4: Mean and standard deviation of service time (payload size = 8000 bits, with RTS/CTS; service time in ms versus collision probability)
n, while n only influences the maximum value of p, as shown in Fig. 3–2(a). Thus, the following delay analysis is valid for different n and we need not specify the value of n.
Mean and Variance of the Service Time

Given Equation (3.11), it is easy to obtain any moment of the service time Ts by taking derivatives of GTs(Z) with respect to Z. Specifically, the mean and variance are

E[Ts] = ∂GTs(Z)/∂Z |_{Z=1} = G'Ts(1)
VAR[Ts] = G''Ts(1) + G'Ts(1) − [G'Ts(1)]²    (3.12)
Fig. 3–4 demonstrates the mean and standard deviation of the service time as functions of the collision probability p. It can be seen that when p > 0.1, both quantities increase exponentially with p. On the other hand, we have found that when p ≤ 0.1, the achieved throughput is almost the same as the maximum achievable throughput. To provide a delay guarantee for delay-sensitive applications such as voice over IP, and to achieve approximately maximum throughput, the wireless LAN should therefore keep the collision probability below 0.1.
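Equation (3.12) can be exercised on any discrete distribution. In the sketch below, the toy two-point service-time distribution is ours, and the derivatives of the PGF are taken numerically rather than symbolically:

```python
def pgf_moments(dist):
    """Mean and variance via Eq. (3.12), from a service-time distribution
    given as {ts_i: p_i} with ts_i in (integer) slot units."""
    G = lambda z: sum(p * z ** t for t, p in dist.items())
    h = 1e-5
    g1 = (G(1 + h) - G(1 - h)) / (2 * h)             # G'(1)  = E[Ts]
    g2 = (G(1 + h) - 2 * G(1) + G(1 - h)) / h ** 2   # G''(1)
    return g1, g2 + g1 - g1 ** 2                     # E[Ts], VAR[Ts]

# Toy distribution: 3 slots w.p. 0.7, 10 slots w.p. 0.3.
mean, var = pgf_moments({3: 0.7, 10: 0.3})
```

Here the direct moments are E[Ts] = 5.1 and VAR[Ts] = 36.3 − 5.1² = 10.29, which the numerical derivatives recover to within the finite-difference error.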
Packet delay bound and delay variation estimate
Because there is typically one shared outgoing queue for all packets from different applications in each mobile node, we can model each node as a queueing system. In this queueing system, the packet arrival process is determined by the aggregate traffic behavior of all applications that emit packets to the MAC layer, and the service time follows the distribution described in the previous subsection. After building such a queueing model, we can derive accurate estimates of the delay and delay variation in the non-saturated case. Notice that the number of packets waiting in the queue, Nq, almost equals zero in the non-saturated case, especially for p ≤ 0.1, as shown in the papers [154, 160] and verified in our simulations later; otherwise, each node would contend for the channel most of the time, resulting in a much higher p.
Delay Bound with Known Packet Arrival Rate

We start the analysis with a simple case, i.e., the packet arrivals follow some process with a known (or estimated) arrival rate. If the arrival process is Poisson, the system can be modeled as an M/G/1 system [86]. Accordingly, the mean of the packet delay T, which consists of the waiting time in the queue and the service time, is

E[T] = E[Ts] + λ·E[Ts²] / (2(1 − ρ)),    (3.13)

where λ is the average arrival rate of the input traffic and ρ = λ·E[Ts] < 1. If the arrival process follows a general distribution, then we have a G/G/1 system, for which there is an upper bound TU on the mean delay [87]:

E[T] ≤ E[Ts] + λ(σa² + σTs²) / (2(1 − ρ)) ≡ TU,    (3.14)

where σTs and σa are the standard deviations of the service time and the packet arrival process, respectively.
These results hold when ρ < 1 for a system with an infinite buffer. The actual delay upper bound should be less than TU because we do not count the packets dropped due to the limited buffer, which would have long delays in the infinite-buffer system. In fact, because we are only interested in the non-saturated case with an almost empty queue, the above results are accurate.
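The M/G/1 formula of Eq. (3.13) is easy to check numerically; in the sketch below, the M/M/1 special case (exponential service, where E[Ts²] = 2E[Ts]²) collapses to the familiar E[T] = E[Ts]/(1 − ρ):

```python
def mg1_mean_delay(lam, m1, m2):
    """Eq. (3.13): mean M/G/1 system delay from the arrival rate `lam` and
    the first two moments m1 = E[Ts], m2 = E[Ts^2] of the service time."""
    rho = lam * m1
    if rho >= 1:
        raise ValueError("queue unstable: rho must be < 1")
    return m1 + lam * m2 / (2 * (1 - rho))

# Exponential service with mean 5 ms at 100 packets/s: rho = 0.5,
# so E[T] = 0.005 / (1 - 0.5) = 10 ms.
delay = mg1_mean_delay(100.0, 0.005, 2 * 0.005 ** 2)
```

A larger second moment of the service time (i.e., more variable service) strictly increases the mean delay at the same load, which is the intuition behind the G/G/1 bound of Eq. (3.14) as well.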
Delay Bound and Delay Variation with Unknown Packet Arrival Rate
In the previous paragraphs we only gave the mean of the packet delay T, with the estimate dependent on the specific packet arrival process and on an accurate estimate of λ. In reality, however, this approach can be infeasible, since λ is hard to estimate when the instantaneous packet arrival rate at each individual node changes dramatically. We thus embark on deriving accurate estimates for the delay and delay variation in a more general case, i.e., without any knowledge about λ.

Let Tsi denote the service time for the i-th packet at the node under consideration. Since the backoff timer is reset for every packet to be transmitted [68], {Ts1, Ts2, ...} are i.i.d. (independently and identically distributed) random variables. Let Ti be the system time (or delay) of the i-th packet, including the service time and the waiting time in the queue, Ri be the residual service time seen by the i-th packet, and Ni be the number of packets found waiting in the queue by the i-th packet at the instant of its arrival.
Based on this notation, we obtain

Ti = Tsi + Ri + Σ_{j=i−Ni}^{i−1} Tsj.    (3.15)

As previously discussed, Ni almost equals 0 in the non-saturated case, so we can approximate Ti as

Ti ≅ Tsi + Ri.    (3.16)

Notice that Ri is the residual service time of the (i − Ni − 1)-th packet; thus we have

Tsi ≤ Ti ≤ Tsi + Ts,i−Ni−1.    (3.17)

By taking expectations on both sides of Equation (3.17), we have

E[Ts] ≤ E[T] ≤ 2E[Ts].    (3.18)
Since it is difficult to derive the variance of Ri in general, we use the standard deviation of the service time, σTs, to approximate that of Ti, i.e., σT, as follows:

σTs ≤ σT ≈ k·σTs,    (3.19)

where k is a constant. From the ns-2 simulation results presented later, k = 1 or 2 gives a good approximation.
In fact, by applying the Residual Life Theorem [86], we can obtain more accurate approximations of E[T] and σT. Let r be the residual service time observed at any time instant during the service. If the service time distribution is FTs(x), then the pdf of r, denoted by fr(x), can be expressed as µ(1 − FTs(x)), where µ = 1/E[Ts]. We thus have

E[r] = ∫₀^∞ x·fr(x)dx = (µ/2)·E[Ts²]
E[r²] = ∫₀^∞ x²·fr(x)dx = (µ/3)·E[Ts³]
E[R] = 0 × P(idle) + E[r]·P(busy) = (r0·µ/2)·E[Ts²]
E[R²] = 0 × P(idle) + E[r²]·P(busy) = (r0·µ/3)·E[Ts³]
Var[R] = (r0·µ/3)·E[Ts³] − ((r0·µ/2)·E[Ts²])²    (3.20)

where r0 = P(busy) is the probability that the server is busy, i.e., there is one packet contending for the channel or being transmitted. Because r0 ≤ 1, we obtain
E[T] ≈ E[Ts] + E[R] ≤ E[Ts] + E[Ts²]/(2E[Ts]) ≡ TUR,    (3.21)

Var[R] = (r0 − r0²)·(µ/3)·E[Ts³] + r0²·((µ/3)·E[Ts³] − ((µ/2)·E[Ts²])²)
       = (r0 − r0²)·(µ/3)·E[Ts³] + r0²·Var[r]
       ≤ (µ/12)·E[Ts³] + Var[r] = (5µ/12)·E[Ts³] − ((µ/2)·E[Ts²])²,    (3.22)

Var[T] ≈ Var[Ts] + Var[R]
       ≤ Var[Ts] + 5E[Ts³]/(12E[Ts]) − (E[Ts²]/(2E[Ts]))² ≡ σ²TUR.    (3.23)
Fig. 3–5(a) illustrates both the lower bound and the upper bound for the packet delay T. We can see that the upper and lower bounds are very close; thus we can characterize the delay with high accuracy, even though the exact value is not available. As expected, when p < 0.1, TUR is tighter than 2 × E[Ts]. This is desirable since we focus on the non-saturated case, where p is small. As revealed by the bounds, the mean of the system delay T is small: 5 ms < E[T] < 10 ms when p ≤ 0.01, and E[T] < 30 ms when p ≤ 0.1. This is sufficient for real-time applications such as VoIP.

The standard deviation of the system delay is illustrated in Fig. 3–5(b). As shown in the figure, it is also small: σTs < 30 ms when p ≤ 0.1. When p ≤ 0.02, the standard deviation is much smaller than E[Ts] (and than E[T], since E[Ts] is the lower bound). Note that σTUR is relatively large when p ≤ 0.002. This is because the approximation in Equation (3.22) assumes r0 ≥ 0.5, whereas r0 should be smaller than 0.5 when p ≤ 0.002.
As a special case, if the packet arrival process is Poisson, then r0 = ρ = λE[Ts] < 1. Thus

E[T] ≈ E[Ts] + E[R] = E[Ts] + (1/2)·λ·E[Ts²] ≡ TURM,    (3.24)

Var[T] ≈ Var[Ts] + Var[R] = Var[Ts] + (1/3)·λ·E[Ts³] − ((1/2)·λ·E[Ts²])² ≡ σ²TURM.    (3.25)
Finally, we comment on the results for the delay and delay variation. First, all the above results are derived for the non-saturated case, which means the traffic intensity ρ < 1 and the collision probability p ≤ 0.1. Second, the approximation in Equation (3.16) relies on the assumption that there are no bulk arrivals. Although this assumption is common in the analysis of queueing systems and holds for both the Poisson and deterministic arrival processes, in practice, bursty traffic such as TCP traffic violates it. Consequently, bursty traffic induces not only longer waiting times in the queue, but also a higher collision probability during the burst period, leading to longer service times. For the above results to remain valid, it is necessary to regulate the arriving traffic at the MAC layer.
Figure 3–5: Packet delay (payload size = 8000 bits, with RTS/CTS). (a) Delay bound: E[Ts], 2 × E[Ts], and TUR versus the collision probability p; (b) standard deviation of delay: σTs, 2 × σTs, 3 × σTs, and σTUR versus p.
3.3.3 Packet Loss Rate

At the MAC layer, a packet may be lost due to queue overflow or MAC collisions. Once a packet is queued, a node attempts to transmit it a certain number of times, denoted by α. If all the attempts fail due to collisions, the packet is dropped. Given the collision probability p, the packet dropping probability Pd due to MAC collisions is

Pd = p^α.    (3.26)

When the packet blocking probability Pblock, i.e., the probability that the queue is full when a packet arrives, is very small, as in the non-saturated case where Nq ≅ 0 as mentioned earlier, the total packet loss rate Pl of the queueing system can be approximated as Pd, i.e.,

Pl ≈ p^α.    (3.27)

We see that when p ≤ 0.1 and α = 7 [68], Pl ≤ 10^−7. Obviously, this satisfies the packet loss requirements of most applications such as VoIP.
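The loss-rate claim checks out arithmetically; a one-line sketch (α = 7 is the retry limit cited from [68]):

```python
def mac_loss_rate(p, alpha=7):
    """Eq. (3.27): packet loss rate when queue blocking is negligible,
    i.e., the probability that all `alpha` transmission attempts collide."""
    return p ** alpha

# At the recommended operating point p <= 0.1, loss stays below 1e-7.
loss = mac_loss_rate(0.1)
```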
On the contrary, a much higher packet loss rate is expected if the network is in the saturated case, for the following reasons. On one hand, the collision probability p becomes significantly large, resulting in considerable packet losses due to collisions. On the other hand, each packet experiences a much longer system delay in the saturated case than in the non-saturated case, which leads to a full queue most of the time and hence blocks newly arriving packets.
Before ending this section, we make a few remarks about the analytical model. Note that all the performance metrics are expressed as functions of the collision probability. However, obtaining the collision probability is not easy. There are two possible approaches. One is to analytically derive the collision probability, which requires full knowledge of the traffic arrival models at the node of interest as well as at all the other nodes in the network. The other is to measure it through experiments; unfortunately, it is not amenable to practical measurement, owing to the lack of a measurement mechanism and the inability of each node to distinguish collisions from channel fading. Therefore, we propose the channel busyness ratio as a good substitute for the collision probability, for the following reasons. First, as mentioned earlier, the channel busyness ratio is an injective function of the collision probability. This indicates that the channel busyness ratio can also serve as the input to the analytical model. Unlike the collision probability, the channel busyness ratio is easy to measure in practice because the IEEE 802.11 is essentially based on carrier sensing. Second, as shown for the non-saturated case, the channel busyness ratio accurately represents the channel utilization, or the normalized throughput, and hence can be used to facilitate network control mechanisms such as call admission control for real-time traffic and rate control for best-effort traffic. Accordingly, all the performance metrics are presented as functions of the channel busyness ratio in the following simulation results.
3.4 Simulation Study of the IEEE 802.11

The simulation study in this section serves two purposes. First, it aims to verify our analytical study in Section 3.3. Second, while our analytical results have shown that the IEEE 802.11 can operate at an optimal point that leads to maximum throughput, low delay, and an almost zero packet loss rate, they do not reveal a specific way to achieve this optimal operating point. Thus we demonstrate how to reach and retain the optimal point through simulations.

3.4.1 Simulation Configuration

The simulation study is conducted using the ns-2 simulator. The IEEE 802.11 system parameters are summarized in Table 3–2. The RTS/CTS mechanism is used. We simulate different numbers of mobile stations in the wireless LAN. Every node initiates an identical UDP/CBR traffic flow to a randomly selected neighbor. The queue length at each node is 10 packets.

As revealed earlier, whether the network operates in the non-saturated or the saturated case can be determined by controlling the collision probability p. Also, the optimal operating point lies where p ≈ 0.1. Without changing the 802.11 protocol, we use two techniques to control p in order to locate the optimal point. One is to schedule the start times of the UDP flows, as described below; the other is to gradually increase the sending rate of each flow from 0. In contrast, the saturated case can easily be simulated by boosting the traffic load to a much higher level than the network can support.
Deterministic minimum-collision-probability scheduling (DPS)
To minimize the collision probability, we schedule the UDP flows in such a way that the start time of one flow is separated from the next by a constant period tint/n, where tint is the packet inter-arrival time of each flow. Thus, if the aggregate traffic rate is less than the network capacity, i.e., the network can handle all the arriving packets from each flow, the
Figure 3–6: Simulation results when payload size = 8000 bits. (a) Maximum normalized throughput versus the number of nodes n (DRS, DPS, saturated case, and analysis); (b) delay in ms versus n for DRS and DPS; (c) delay in s versus n for the saturated case.
collision probability could be reduced to zero. In this case, there is no queueing delay and
the system delay is the random backoff time plus one packet transmission time. We call
this scheduling deterministic minimum-collision-probability scheduling.
Distributed randomized scheduling (DRS)
However, in a distributed WLAN environment, it is very difficult for each node to
exactly know the start time of all the flows and schedule its own flows accordingly to
avoid collisions. Therefore, to simulate a more realistic scenario, we cannot adopt the de-
terministic scheduling described above. We thus employ a simple yet effective scheduling
algorithm that starts each flow at a randomized time. Specifically, the start time of each flow
is uniformly chosen in [0, tint], which keeps all the nodes from contending for the channel
at the same time. As a result, the collision probability is reduced and no node needs to care
about other nodes’ transmission schedule.
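The two scheduling policies can be sketched in a few lines. This is an illustrative reconstruction, not the simulation code used in this work; the function names and the fixed random seed are our own assumptions.

```python
import random

def dps_start_times(n, t_int):
    """Deterministic minimum-collision-probability scheduling (DPS):
    flow i starts at offset i * t_int / n, so the n flow start times are
    spread evenly over one packet inter-arrival period t_int."""
    return [i * t_int / n for i in range(n)]

def drs_start_times(n, t_int, seed=1):
    """Distributed randomized scheduling (DRS): each node independently
    draws its start time uniformly from [0, t_int], with no knowledge of
    the other nodes' schedules."""
    rng = random.Random(seed)
    return [rng.uniform(0.0, t_int) for _ in range(n)]

# Example: 50 CBR flows, one packet every 20 ms per flow.
dps = dps_start_times(50, 0.020)   # evenly spaced offsets 0, 0.4 ms, ...
drs = drs_start_times(50, 0.020)   # randomized offsets in [0, 20 ms]
```

Under DPS, as long as the aggregate rate stays below capacity, no two flows ever contend for the channel at the same instant; DRS only makes simultaneous contention unlikely.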
[Figure 3–7: Simulation results when n = 50 and payload size = 8000 bits. Panels show (a)
normalized throughput (simulation vs. analysis), (b) mean of delay (s), and (c) standard
deviation of delay (s), each as a function of the channel busyness ratio; the delay panels
also plot the bounds E[T], E[Ts], 2×E[Ts], and TUR, and σT, σTs, 2×σTs, and σTUR.]
3.4.2 Simulation Results
Saturated case vs. non-saturated case
In Fig. 3–6, for the non-saturated case, we see that the normalized throughput that
DPS achieves is slightly higher than the theoretical maximum throughput, since it uses
perfect scheduling and hence reduces the collision probability to zero. Likewise, the nor-
malized throughput that DRS achieves is close to the theoretical maximum throughput,
since it greatly reduces the collision probability. On the contrary, the throughput in the
saturated case is much lower. As is consistent with the analytical results, the non-saturated
throughput is almost independent of the number of nodes, whereas the saturated through-
put declines significantly as the number of nodes increases. For delay, we see that the two
cases differ by orders of magnitude. Also, while the delay stays almost
unchanged in the non-saturated case as the number of nodes increases, it increases in the
saturated case. This is due to the fact that in the latter case, each node always has packets to
transmit and keeps contending for the channel, which greatly increases the collision prob-
ability. As a result, each packet suffers from both long queueing delay and service time.
Note that DPS enjoys a shorter delay than DRS since it reduces the collision probability
more effectively.
Optimal operating point
Since Fig. 3–6 shows that DRS yields performance comparable to that of DPS, we
use DRS as our scheduling algorithm henceforth. By gradually increasing the sending rate
of each flow, we are able to locate the optimal operating point as shown in Figs. 3–7 and
3–8. While Fig. 3–7 presents the performance of throughput, delay, and delay variation
as a function of the channel busyness ratio, Fig. 3–8 shows the behavior of average queue
length and packet loss rate when input traffic increases.
Two important observations are made. First, we observe there is a turning point in
all the curves where the channel busyness ratio is about 0.95. Before that point, as the
input traffic increases, the throughput keeps increasing, the delay and delay variation are
small and almost unchanged, the queue at each node is empty, and the packet loss rate is
zero. Note that with the small delay and delay variation, the delay requirements of the
real-time traffic can be adequately supported. After that point, the queue and the collision
probability form a positive feedback loop. A slightly larger collision probability causes
the queue to build up. The queue, even with one packet always in it, will force the MAC to
keep contending for the channel, thereby exponentially increasing the collision probability,
which in turn forces more packets to accumulate in the queue. Then, catastrophic effects
take place: the throughput drops quickly, the queue starts to build up and the delay and
delay variation increase dramatically, and packets suffer a large loss rate. Clearly,
this turning point is the optimal operating point that we should tune the network to work
around, where the throughput is maximized and the delay and delay variation are small.
Second, as shown in Fig. 3–7, the simulation results verify our analytical study of the
IEEE 802.11. The throughput curves obtained from analysis and simulation coincide with
each other. Also as indicated in our analytical study, before the optimal point is reached,
the network stays in the non-saturated case and the queueing delay is almost zero; thus
the packet delay T can be accurately estimated by the service time TS, which provides
the lower bound. Meanwhile, the mean and variation are well bounded by TUR and σTUR
before the turning point, as shown in equations (3.18), (3.19), (3.21), and (3.23).
[Figure 3–8: Simulation results when n = 50 and payload size = 8000 bits. Panels show (a)
average queue length and (b) packet loss rate, each as a function of the channel busyness
ratio.]
3.5 Discussions
3.5.1 Impact of Fading Channel
So far we have assumed that the channel is perfect. However, when channel fading is taken
into account, packet losses are no longer due to collisions only; they may well be caused by channel
fading. Practically, it is extremely difficult to distinguish these two causes. As a matter
of fact, the 802.11 responds in the same way if the transmitter does not correctly receive
its expected frame, which may be either CTS or ACK, no matter whether this is due to
collision or channel fading. Based on this observation, we can incorporate the packet error
probability into the collision probability as the recent work [58] did, and all the analytical
results still hold.
It is important to note that normally channel fading is not a serious problem in the
WLAN, which features low node mobility and a relatively stable channel. However, if the
packet error probability due to channel fading becomes significant, i.e., the equivalent col-
lision probability is high in our model, the QoS level will be hurt. Our analytical results
show that in this case, as illustrated in Figs. 3–1, 3–4, 3–5(a), and 3–5(b), the normalized
throughput decreases, the service time increases, the mean and variation of delay increase
along with the service time, and packet loss rate increases as well. However, with our an-
alytical model, we can still calculate the maximum throughput, packet loss rate, and give
accurate estimates of delay and delay variation according to Equations (3.5), (3.18), (3.19),
(3.21), and (3.23).
3.5.2 Impact of Prioritized MAC
Since our focus is on how well the original IEEE 802.11 DCF can support QoS, we do
not change the MAC protocol in the analysis and simulations. Within either the real-time
traffic or the best effort traffic, no differentiation is made. As a result, all the real-time
traffic, including CBR and VBR traffic, equally shares the delay and delay variation, which
sometimes is not flexible enough. If a prioritized 802.11 MAC protocol similar to [1, 125]
is adopted, we are able to provide priority within the real-time traffic. As a result, the high
priority real-time traffic receives smaller delay variation, whereas the low priority real-time
traffic receives higher delay variation [33].
3.6 Conclusion
Despite considerable efforts spent on performance analysis and QoS provisioning for
the IEEE 802.11 WLAN, the question of how well it can support QoS has remained open. In
this chapter, we clearly answer this question through thorough studies, which constitutes
our key contribution.
We have analytically characterized the optimal operating point for the 802.11 WLAN,
and shown that if the network is tuned to work at this point, in addition to achieving the
theoretical maximum throughput, it can support the major QoS metrics, such as throughput, delay
and delay variation, and packet loss rate, as required by real-time services. This is further
validated via extensive simulations. We therefore clarify that the IEEE 802.11 WLAN can
provide statistical QoS guarantees, not just differentiated service, for multimedia services.
We also demonstrate that the channel busyness ratio can represent the network utilization
accurately and in a timely manner; hence it can be used to facilitate the regulation of total input traffic
to support QoS.
CHAPTER 4
A CALL ADMISSION AND RATE CONTROL SCHEME FOR MULTIMEDIA SUPPORT
OVER IEEE 802.11 WIRELESS LANS
Quality of service (QoS) support for multimedia services in the IEEE 802.11 wireless
LAN is an important issue for such WLANs to become a viable wireless access to the
Internet. In this chapter, we endeavor to propose a practical scheme to achieve this goal
without changing the channel access mechanism. To this end, a novel call admission and
rate control (CARC) scheme is proposed. The key idea of this scheme is to regulate the
arriving traffic of the WLAN such that the network can work at an optimal point. We first
show that the channel busyness ratio is a good indicator of the network status in the sense
that it is easy to obtain and can accurately represent channel utilization in a timely manner. Then we
propose two algorithms based on the channel busyness ratio. The call admission control
algorithm is used to regulate the admission of real-time or streaming traffic and the rate
control algorithm to control the transmission rate of best effort traffic. As a result, the real-
time or streaming traffic is supported with statistical QoS guarantees and the best effort
traffic can fully utilize the residual channel capacity left by the real-time and streaming
traffic. In addition, the rate control algorithm itself provides a solution that could be used
above the medium access mechanism to approach the maximal theoretical channel utilization.
A comprehensive simulation study in ns-2 has verified the performance of our proposed
CARC scheme, showing that the original 802.11 DCF protocol can statistically support strict
QoS requirements, such as those required by voice over IP or streaming video, and at the
same time, achieve a high channel utilization.
4.1 Introduction
In recent years, the IEEE 802.11 wireless LAN [68] has been increasingly employed
to access the Internet because of its simple deployment and low cost. According to the
IEEE 802.11 standard, the medium access control (MAC) mechanism contains two access
methods, i.e., Distributed Coordination Function (DCF) and Point Coordination Function
(PCF), with the former being specified as the fundamental access method. Despite its
popular use, currently only best effort traffic is supported in DCF. Section 4.2 describes the
802.11 protocol in more detail.
Quality of service (QoS) provisioning for multimedia services including voice, video,
and data is crucial for the IEEE 802.11 wireless LAN to continue to thrive and evolve as a
viable wireless access to the Internet. Although there are several schemes ([96, 9, 88, 34,
124]) which use the PCF mode to support QoS for real-time traffic, we do not pursue this
line further because PCF is an optional access method ([68]) which is only usable in
infrastructure network configurations and is not supported by most current wireless cards. In
addition, it may result in poor performance as shown in the papers [94, 145, 126]. In
this chapter, we focus on the 802.11 DCF mode. However, guaranteeing QoS for real-time
traffic in the 802.11 DCF mode is not an easy task, given that it is contention-based
and distributed by nature, which renders effective and efficient control very difficult. Furthermore,
other problems such as hidden terminals or channel fading make things worse.
In the face of these challenges, considerable research ([1, 81, 107, 114, 119, 125, 137]) has
been conducted to enhance the IEEE 802.11 WLAN to support service differentiation or
prioritized service [18]. Aad and Castelluccia [1] proposed scaling the contention window
and using different inter-frame spacing or maximum frame lengths for services of different priorities.
As a matter of fact, similar ideas have recently been adopted in the enhanced DCF (EDCF)
defined in the IEEE 802.11e draft ([72, 31, 99]). Two mechanisms [125], i.e., virtual MAC
and virtual source, were proposed to enable each node to provide differentiated services for
voice, video, and data. By modifying the 802.11 MAC, a distributed priority scheduling
scheme was designed to approximate an idealized schedule, which supports prioritized
services [81]. Similarly, by splitting the transmission period into a real-time one and a
non-real-time one, real-time traffic is supported with QoS guarantee [114]. However, the
DCF mode was dramatically changed. The Blackburst scheme [119] provided high priority for real-
time traffic. Unfortunately, it imposes special requirements on high priority traffic and is
not fully compatible with the existing 802.11 standard. In summary, if the semantics of the
802.11 DCF is maintained, only differentiated service, rather than stringent QoS assurance,
is supported.
Meanwhile, much effort has also been spent in improving throughput for the 802.11
DCF ([12, 13, 16, 20, 85, 90]). Based on the work [21], Cali et al. attempted to approach
the protocol capacity by replacing the exponential backoff mechanism with an adaptive one
[20]. Kim and Hou developed a model-based frame scheduling algorithm to improve the
protocol capacity of the 802.11 [85]. Two fast collision resolution schemes were proposed
by Bharghavan [13] and Kwon et al. [90], respectively. The idea is to use two channels or to
adjust backoff algorithms to avoid collisions, thereby providing higher channel utilization.
It is important to note that all these works focused on the throughput in the saturated case.
In our previous work [150], we have shown through both theoretical and simulation
studies that the IEEE 802.11 DCF protocol could satisfy the QoS requirements of the real-
time and streaming traffic while achieving the maximal channel utilization when it is work-
ing at the optimal point corresponding to a certain amount of arriving traffic. If the arriving
traffic is heavier than this threshold, the WLAN enters saturation, resulting in significant
increase in delay and decrease in throughput; on the other hand, if the arriving traffic is less
than this threshold, channel capacity is wasted. In reality, however, tuning a network that
operates on the basis of channel contention to work at this point requires an effective and
efficient control algorithm to regulate the input traffic [109]. Therefore, we are motivated
to design a call admission and rate control scheme (CARC) (Section 4.4). Specifically, call
admission control (CAC) is used for real-time or streaming traffic, and rate control (RC)
for best effort data traffic.
Essentially, the CARC scheme has the following distinguishing features:
• It utilizes a new measure of network status, the channel busyness ratio, to exercise
traffic regulation; this measure is easy to obtain and can represent the network
utilization accurately and in a timely manner, as shown in Section 4.3.
• The call admission control scheme is able to provide statistical QoS guarantees for
real-time and streaming traffic.
• The rate control scheme allows best effort traffic to utilize all the residual channel
capacity left by the real-time and streaming traffic while not violating their QoS
metrics, thereby enabling the network to approach the maximal theoretical channel
utilization.
• Since each node keeps track of the channel busyness ratio locally to conduct control,
this scheme is fully distributed and suits the DCF mode well.
We have implemented the CARC scheme in ns-2 [106], and conducted a comprehensive
simulation study to evaluate its performance. As shown in Section 4.5, CARC is able
to support real-time services, such as voice and video, with QoS guarantees, and achieve
high throughput by allowing best effort traffic to make full use of the residual channel ca-
pacity. This confirms that the 802.11 WLAN can not only support differentiated service,
but also support strict QoS.
In Section 4.6, we discuss the effect of channel fading on our scheme and the possible
implications arising from the employment of a prioritized 802.11 DCF. Finally, Section
4.7 concludes this chapter.
4.2 Background
4.2.1 Operations of the IEEE 802.11 DCF Protocol
The basic access method in the IEEE 802.11 MAC protocol is DCF (Distributed coor-
dination function), which is based on carrier sense multiple access with collision avoidance
(CSMA/CA). Before starting a transmission, each node performs a backoff procedure, with
the backoff timer uniformly chosen from [0, CW] in terms of time slots, where CW is the
current contention window. If the channel is determined to be idle for a backoff slot, the
backoff timer is decreased by one. Otherwise, it is suspended. When the backoff timer
reaches zero, the node transmits a DATA packet. If the receiver successfully receives the
packet, it acknowledges the packet by sending an acknowledgment (ACK) after an inter-
val called short inter-frame space (SIFS). So this is a two-way DATA/ACK handshake. If
no acknowledgment is received within a specified period, the packet is considered lost; so
the transmitter will double the size of CW and choose a new backoff timer, and start the
above process again. When the transmission of a packet fails for a maximum number of
times, the packet is dropped. To reduce collisions caused by hidden terminals [14], the
RTS/CTS (request to send/clear to send) mechanism is employed. Therefore, a four-way
RTS/CTS/DATA/ACK handshake is used for a packet transmission.
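The backoff and retransmission procedure described above can be sketched as follows. This is a hedged illustration: the constants follow common DCF defaults, and `dcf_send` and `attempt_tx` are our own names standing in for the actual channel-access machinery.

```python
import random

# Illustrative constants following common 802.11 DCF defaults;
# the names are ours, not from the standard.
CW_MIN, CW_MAX, RETRY_LIMIT = 31, 1023, 7

def dcf_send(attempt_tx, rng=None):
    """Sketch of the DCF backoff/retransmission loop for one packet.

    attempt_tx(backoff_slots) stands in for counting the backoff timer
    down over idle slots and then transmitting; it returns True if the
    expected frame (CTS or ACK) arrives and False on a timeout.
    Returns (delivered, attempts_used)."""
    rng = rng or random.Random()
    cw = CW_MIN
    for attempt in range(1, RETRY_LIMIT + 2):
        backoff = rng.randint(0, cw)          # uniform in [0, CW] slots
        if attempt_tx(backoff):
            return True, attempt              # success: ACK received
        cw = min(2 * (cw + 1) - 1, CW_MAX)    # double CW after a loss
    return False, RETRY_LIMIT + 1             # retry limit reached: drop

# A lossless channel delivers on the first attempt:
ok, tries = dcf_send(lambda slots: True)
# ok == True, tries == 1
```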
In the IEEE 802.11, the network can be configured into two modes, i.e., infrastructure
mode or ad hoc mode. In the infrastructure mode, an access point (AP) is needed to partic-
ipate in the communication between any two nodes, whereas in the ad hoc mode, all nodes
can directly communicate with each other without the participation of an AP.
4.2.2 QoS Requirements for Multimedia Services
As the Internet expands its supported traffic from best effort data to a variety of mul-
timedia services, including video conferencing, voice over IP (VoIP), streaming audio and
video, WWW, e-mail, and file transfer, etc., QoS provisioning has become an important
issue. The commonly accepted QoS metrics mainly include bandwidth, delay, delay jitter
(i.e., delay variation), and packet loss rate (or bit error rate). According to their QoS require-
ments, current multimedia services can be grouped into three classes: real-time, streaming,
and non-real-time (or best effort).
Real-time: Real-time traffic has stringent requirements in delay and delay jitter,
which is necessary for interactive communications like VoIP and videoconferencing. Ac-
cording to the ITU standards [73, 74], the one way transmission delay should be prefer-
ably less than 150 ms, and must be less than 400 ms. However, it is not very sensitive to
packet loss rate. Typically, a loss rate of 1% is acceptable for real-time video with rates of
16 ∼ 384 Kbps, and a loss rate of 3% for real-time audio with rates of 4 ∼ 64 Kbps. Because
delayed packets are not tolerable, retransmission of lost packets is not useful. Thus, UDP
is used to transmit real-time traffic.
Streaming: Streaming audio or video belongs to this class. Compared with real-time
traffic, it is less sensitive to delay or delay jitter. At the expense of increased delay, playout
buffers (or jitter buffers) can be used to compensate for delay jitter in the range of 20 ∼ 50
ms. As specified in the ITU standard G.1010 [74], acceptable delay may be up to 10
seconds, while the packet loss rate is about 1%. Streaming traffic is normally transported
via UDP, although a retransmission strategy can be added in the application layer.
Non-real-time: Non-real-time services comprise e-mail, file transfer, and web brows-
ing. Most non-real-time services are tolerant to delay ranging from seconds to minutes or
even hours. However, the data to be transferred has to be received error-free, which means
reliable transmission is required. So non-real-time traffic is transported with TCP.
4.3 Channel Busyness Ratio
In this section, we give the definition of the channel busyness ratio and elaborate on
why and how it can be used to represent the network status.
4.3.1 Definition of Channel Busyness Ratio
At the MAC layer, a backoff time slot could be an empty slot, a period associated with
a successful transmission, or a period associated with a collision ([68, 15, 154, 160]). Let
pi, ps, and pc be the probabilities that the observed backoff time slot is each of the three kinds
of slots, respectively. Let Tsuc be the average time period associated with one successful
transmission, and Tcol be the average time period associated with collisions. Then
Tsuc = rts + cts + data + ack + 3·sifs + difs
Tcol = rts + cts_timeout + difs = rts + eifs,   (4.1)
for the case where the RTS/CTS mechanism is used, and
Tsuc = data + ack + sifs + difs
Tcol = data* + ack_timeout + difs = data* + eifs,   (4.2)
for the case where there is no RTS/CTS mechanism, where data and data* (please refer
to [15] for the derivation of data*) are the average lengths, in seconds, of the successful trans-
mission and collision of the data packets, respectively. Notice that the sources keep silent
when waiting for CTS packets, and any station which senses a collision will set its network
allocation vector (NAV) [68] with an eifs period. Thus, it can be easily obtained that
Ri = pi·σ / (pi·σ + ps·Tsuc + pc·Tcol)
Rb = 1 − Ri
Rs = ps·Tsuc / (pi·σ + ps·Tsuc + pc·Tcol),   (4.3)
where σ is the length of an empty backoff time slot, Ri is defined as the channel idleness
ratio, Rb the channel busyness ratio, and Rs the channel utilization. Clearly, the channel
busyness ratioRb can also be thought of as the ratio of time that the channel is busy due
to successful transmissions as well as collisions to the total time. Once we obtain Rs, the
normalized throughput s is expressed as

s = Rs × data/Tsuc,   (4.4)
and the absolute throughput is s times the bit rate for data packets.
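As a numerical sketch, the timing periods of equation (4.1) and the mapping from slot probabilities to throughput in equations (4.3) and (4.4) can be computed directly. The parameter values are taken from Table 4–1 (RTS/CTS case); the EIFS expression and all variable names here are our own assumptions.

```python
# Table 4-1 values: 2 Mbps data rate, 1 Mbps control/PLCP rate.
SLOT, SIFS, DIFS = 20e-6, 10e-6, 50e-6
PHY_HDR = 192 / 1e6                       # 192-bit PHY header at 1 Mbps
RTS = 160 / 1e6 + PHY_HDR
CTS = ACK = 112 / 1e6 + PHY_HDR
DATA = (8000 + 224) / 2e6 + PHY_HDR       # payload + MAC header + PHY header

T_SUC = RTS + CTS + DATA + ACK + 3 * SIFS + DIFS   # eq. (4.1)
EIFS = SIFS + ACK + DIFS                  # one common EIFS definition
T_COL = RTS + EIFS

def throughput(p_i, p_s, p_c):
    """Equations (4.3)-(4.4): channel ratios and normalized throughput
    from the probabilities of the three backoff-slot types."""
    total = p_i * SLOT + p_s * T_SUC + p_c * T_COL
    r_b = 1 - p_i * SLOT / total          # channel busyness ratio Rb
    r_s = p_s * T_SUC / total             # channel utilization Rs
    return r_b, r_s, r_s * DATA / T_SUC   # normalized throughput s
```

With every slot a success (p_s = 1), s reduces to data/Tsuc, the header and handshake overhead of one transmission.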
4.3.2 Channel Busyness Ratio: An Accurate Indicator of the Network Utilization
First, we build the relationship between the channel busyness ratio and the packet
collision probability, denoted by p, that a node may experience.
We assume the total number of nodes in a WLAN is n. The transmission probability
for each node in any backoff time slot is pt. Obviously, we obtain the following equations:
pi = (1 − pt)^n
ps = n·pt·(1 − pt)^(n−1)
pc = 1 − pi − ps   (4.5)
Meanwhile, p can be expressed in terms of pt as follows:

p = 1 − (1 − pt)^(n−1)   (4.6)
According to Equations (4.3), (4.5), and (4.6), we can express Rb, Rs, and s as functions of
p, as shown in Fig. 4–1. All the parameters involved are indicated in Table 4–1, and
most are the default values in the IEEE 802.11. In Fig. 4–1, three cases, i.e., n = 5, 10,
and 300, are considered.
Several important observations can be made from Fig. 4–1. First, we find that the channel
busyness ratio is an injective function of the collision probability; in fact, this can easily be
proved. Second, when p ≤ 0.1, Rb is almost the same as Rs, namely
Rs ≈ Rb. (4.7)
This is not hard to understand. When the collision probability p is very small, the channel
resource wasted in collisions is so minor that it can be ignored. Third, the maximal through-
put is almost insensitive to the number of active nodes. As a matter of fact, we have shown
in our previous work [150] that the point where the maximal throughput is achieved is the
optimal working point for the network where the collision probability is very small and the
packet delay and delay jitter are small enough to support the QoS requirements of real-time
traffic. Given these observations and the fact that the throughput is proportional to Rs, as
shown in Equation (4.4), we can therefore use the measured channel busyness ratio Rb
to accurately estimate the throughput from zero to the maximum value.
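The curves of Fig. 4–1 can be reproduced by inverting equation (4.6) and chaining equations (4.5) and (4.3)–(4.4). The sketch below hardcodes the timing values computed from Table 4–1 for the RTS/CTS case; all names are our own.

```python
SLOT = 20e-6                          # empty backoff slot (s)
T_SUC, T_COL = 5.344e-3, 0.716e-3     # eq. (4.1) with Table 4-1 values
DATA = 4.304e-3                       # data packet time incl. headers (s)

def curves(p, n):
    """Given collision probability p and n nodes, return (Rb, Rs, s)
    via equation (4.6) inverted, then (4.5) and (4.3)-(4.4)."""
    pt = 1 - (1 - p) ** (1 / (n - 1))        # invert p = 1-(1-pt)^(n-1)
    pi = (1 - pt) ** n                       # eq. (4.5)
    ps = n * pt * (1 - pt) ** (n - 1)
    pc = 1 - pi - ps
    total = pi * SLOT + ps * T_SUC + pc * T_COL
    rb = 1 - pi * SLOT / total               # channel busyness ratio Rb
    rs = ps * T_SUC / total                  # channel utilization Rs
    return rb, rs, rs * DATA / T_SUC         # normalized throughput s

# Sweeping p shows Rb rising monotonically with p (it is injective):
samples = [curves(p, n=50) for p in (0.001, 0.01, 0.1)]
```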
[Figure 4–1: Channel busyness ratio, channel utilization, and normalized throughput as
functions of the collision probability p, for the RTS/CTS scheme with n = 5, 10, and 300
nodes.]
Next, we present some ns-2 simulation results in Fig. 4–2, which shows the performance
of throughput, delay, and delay variation as functions of the channel busyness ratio.
Again, the IEEE 802.11 system parameters are summarized in Table 4–1. Every node initiates
an identical UDP/CBR traffic flow to a randomly selected neighbor. The queue length
at each node is 100 packets. Different points in Fig. 4–2 correspond to different sending
rates of the flows. It can be seen that there is a turning point in all the curves, where the channel
busyness ratio is about 0.95. Before that point, as the input traffic increases, the throughput
keeps increasing, the delay (including queueing delay, backoff time and transmission time)
and delay variation do not change much and are small enough to support the real-time
traffic. After that point, the throughput drops quickly and the delay and delay variation
increase dramatically. Clearly, this turning point is the optimal operating point that we
should tune the network to work around, where the throughput is maximized and the delay
and delay variation are small. Therefore, the network status is known by keeping track of
the channel busyness ratio.
Further, if we denote by BU the channel utilization corresponding to the optimal point,
we can estimate the available normalized throughput by sa = (BU − Rb) × data/Tsuc before
the network achieves the maximal throughput. As shown in our work [150], BU is almost
the same for different numbers of active nodes and packet sizes, and BU ≈ 0.90 (without
RTS/CTS) or BU ≈ 0.95 (with RTS/CTS).

Table 4–1: IEEE 802.11 system parameters

  Bit rate for DATA packets    2 Mbps
  Bit rate for RTS/CTS/ACK     1 Mbps
  PLCP data rate               1 Mbps
  Backoff slot time            20 µs
  SIFS                         10 µs
  DIFS                         50 µs
  Phy header                   192 bits
  MAC header                   224 bits
  DATA packet                  8000 bits + Phy header + MAC header
  RTS                          160 bits + Phy header
  CTS, ACK                     112 bits + Phy header

[Figure 4–2: Simulation results when the number of nodes equals 50 and the RTS/CTS
mechanism is used. Panels show (a) normalized throughput, (b) mean of delay (s), and (c)
standard deviation of delay (s), each as a function of the channel busyness ratio.]
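The residual-capacity estimate sa = (BU − Rb) × data/Tsuc translates into a one-line helper. The sketch below uses BU = 0.95 and the timing values computed from Table 4–1 for the RTS/CTS case; the function name is our own.

```python
T_SUC, DATA, BU = 5.344e-3, 4.304e-3, 0.95   # Table 4-1 values, RTS/CTS case

def available_throughput(rb):
    """Normalized throughput still available before the network reaches
    the optimal point: sa = (BU - Rb) * data / Tsuc, clamped at zero."""
    return max(0.0, BU - rb) * DATA / T_SUC
```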
4.3.3 Measurement of Channel Busyness Ratio
According to the definition of Rb, it is easy to conduct the measurement, since the
IEEE 802.11 is a CSMA-based MAC protocol working on the physical and virtual carrier
sensing mechanisms. The channel is determined to be busy when the measuring node is
sending, receiving, or its network allocation vector (NAV) [68] indicates the channel is
busy, and to be idle otherwise.
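This measurement reduces to simple busy-time accounting. The sketch below is our own abstraction of that bookkeeping (the event interface is hypothetical, not an 802.11 API): the meter is told whenever the node's send/receive/NAV state changes, and it reports the fraction of elapsed time the channel was busy.

```python
class BusynessMeter:
    """Tracks the channel busyness ratio Rb over [0, now]."""

    def __init__(self):
        self.busy_time = 0.0    # accumulated busy time (s)
        self.last_change = 0.0  # timestamp of the last state change (s)
        self.busy = False       # current channel state

    def on_state_change(self, now, busy):
        """Call whenever the node starts/stops sending or receiving,
        or its NAV marks the channel busy/idle."""
        if self.busy:
            self.busy_time += now - self.last_change
        self.last_change = now
        self.busy = busy

    def ratio(self, now):
        """Channel busyness ratio Rb measured over [0, now]."""
        busy = self.busy_time + ((now - self.last_change) if self.busy else 0.0)
        return busy / now if now > 0 else 0.0
```

In practice the ratio would be computed over a sliding window rather than from time zero, so that it tracks the current load.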
4.4 CARC: Call Admission and Rate Control
As revealed in previous sections, keeping the channel busyness ratio close to a certain
threshold is essential to maximizing network throughput and providing QoS. To accomplish
this goal, it is crucial to regulate total input traffic through call admission control (CAC)
over real-time traffic and rate control (RC) over best effort traffic, given that the 802.11
DCF protocol is designed to provide best effort services and does not differentiate any
types of services.
We thus propose a call admission and rate control (CARC) scheme, which consists
of two mechanisms: CAC and RC. In what follows, the design rationale is discussed first,
followed by detailed descriptions of the CAC and RC algorithms, in order.
4.4.1 Design Rationale
The goal of an effective call admission and rate control scheme is to provide QoS for
real-time traffic, and to allow best effort traffic to make full use of the residual channel
resource. In the context of the WLAN where each node only has a partial view of the net-
work, however, the design of CARC is much more complicated than it appears, especially
due to the following difficulties.
The first problem is that multiple new real-time flows may be simultaneously admit-
ted by individual nodes if not coordinated, henceforth referred to as over-admission. To
mitigate this problem, each node can randomly back off to delay a new flow that could be
admitted. During the backoff period, each node keeps monitoring the channel busyness
ratio; if the measured channel busyness ratio is increased (due to the admission of new
flows by other nodes) such that the previously could-be-admitted but delayed new flow can
no longer be accepted, the flow is rejected. Another way is that each node, after admitting
a new flow, drops the flow if later on the measured channel busyness ratio is found to be
greater than the maximum channel utilization. In this case, however, the QoS level of the
real-time flows admitted earlier has already suffered.
Another more severe issue is that it is very hard for each individual node to accurately
estimate the total traffic rate of the currently admitted real-time flows based on the mea-
sured channel busyness ratio, since the latter also includes the contribution from best effort
traffic. Without an accurate estimate, the rate of best effort traffic cannot be effectively
controlled. This in turn may cause the CAC algorithm to completely reject any real-time
traffic if the channel busyness ratio is boosted to a high level by heavy best effort traffic.
Therefore, to achieve its goal, the CARC scheme must properly address these prob-
lems. To completely avoid the over-admission problem, we opt for a coordinator-aided
CAC scheme. In other words, all admission decisions are made by a coordinating node,
which can record the current number of admitted real-time flows and their occupied chan-
nel bandwidth in the network. Clearly, in this way no over-admission will occur. It is im-
portant to note that a coordinator is available whether the wireless LAN is working in the
infrastructure mode or in the ad hoc mode. If the network is working in the infrastructure
mode, the access point is the coordinator. Otherwise, a mobile node can be elected to act as
the coordinator in the network using one of many algorithms in the literature ([49, 116]).
Further discussion of the election algorithm is beyond the scope of this chapter.
Since the 802.11 DCF is not prioritized, our CAC algorithm guarantees a uniform
QoS level in terms of delay, delay variation, and packet loss rate for all real-time traffic.
Note that two criteria are applied to CAC. The first criterion is that CAC admits a new real-
time flow only if the requested resource is available. Here we need to set an upper bound,
denoted by BM, for bandwidth reservation for real-time traffic [33]. We set BM to 80% (it
could be changed depending on the traffic composition) of the maximum channel utilization,
denoted by BU, of the WLAN, for two reasons. It first ensures that the best effort traffic is
operational all the time, since the best effort traffic is at least entitled to 20% of the channel
throughput. In addition, the 20% of the channel throughput for best effort traffic can be
used to accommodate sizable fluctuations caused by VBR real-time traffic. The second
criterion is that the QoS provided for the currently existing real-time flows is not affected.
This can be guaranteed as long as the first criterion is in place to make sure the collision
probability is kept around a small value as shown earlier.
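The first criterion can be sketched as a one-line admission test. BU and the 80% factor follow the text, while the function and variable names are ours; the full procedure additionally tracks peak rates for VBR flows.

```python
BU = 0.95            # maximum channel utilization (RTS/CTS case)
BM = 0.8 * BU        # reservation cap for real-time traffic

def admit(cu_aggregate, cu_new):
    """Admit a new real-time flow of utilization cu_new only if the
    total reserved utilization stays within the cap BM."""
    return cu_aggregate + cu_new <= BM
```

For example, with BU = 0.95 the cap is 0.76, so a flow of utilization 0.2 is admitted when 0.5 is already reserved, but a flow of 0.3 is not.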
For best effort traffic, the rate control (RC) scheme must ensure two things. First, best
effort traffic should not affect the QoS level of the admitted real-time traffic. Second, best
effort traffic should have access to the residual bandwidth left by real-time traffic in order
to efficiently utilize the channel. Clearly, both demand an accurate estimate of the instan-
taneous rate of ongoing real-time traffic. If the network is working in the infrastructure
mode, this is achievable. In this case, since all communications must go through the access
point, it can monitor the traffic in both directions, i.e., the upstream flows that are from
mobile nodes to the access point, and the downstream flows that are from the access point
to mobile nodes. On the other hand, if the network is working in the ad hoc mode, accurate
rate control becomes much more difficult. In this case, since all mobile nodes can directly
communicate with each other, no node has perfect knowledge of the instantaneous traffic
rate of the real-time traffic as the access point does. At the same time, no single node can
accurately monitor all the traffic in the air and control the traffic rate of every other node.
Therefore, an effective distributed rate control scheme is needed for the ad hoc mode.
4.4.2 Call Admission Control
In the CAC scheme, three parameters (TR, TR_peak, len) are used to characterize the
bandwidth requirement of a real-time flow, where TR is the average rate and TR_peak the
peak rate, both in bit/s, and len is the average packet length in bits. For CBR traffic,
TR = TR_peak; for VBR traffic, TR < TR_peak. We use the channel utilization cu that a
flow will occupy to describe the bandwidth requirement:

    cu = U(TR) = (TR / len) × T_suc,    (4.8)

where U is the mapping function from traffic rate to channel utilization, and T_suc is defined
in Equation (4.1) or (4.2). Thus (cu, cu_peak) specifies a flow's bandwidth requirement, where
cu = U(TR) and cu_peak = U(TR_peak).
On the side of the coordinator, the total bandwidth occupied by all admitted real-time
flows is recorded in two parameters, i.e., the aggregate (cu, cu_peak), denoted by (cu_A,
cu_peakA), which are updated when a real-time flow joins or leaves through the following
admission procedure.
When receiving a real-time connection request from its application layer, a node must
send a request with the specified (cu, cu_peak) to the coordinator, indicating that it wants to
establish a real-time flow. Only after the request is admitted does the node start to establish
the flow with the intended destination. Otherwise, the node rejects the request and informs
the corresponding application.
Upon receiving a QoS request with parameters (cu, cu_peak), the coordinator checks
whether the remainder of the quota B_M can accommodate the new real-time flow. Specifically,
it carries out the following:

• If cu_A + cu < B_M and cu_peakA + cu_peak < B_U,¹ the coordinator issues the
"connection admitted" message, and updates (cu_A, cu_peakA) accordingly;

• Otherwise, the coordinator issues the "connection rejected" message.
Finally, when a real-time flow ends, the source node of the flow should send a "connection
terminated" message to the coordinator, and the latter responds with a "termination
confirmed" message and updates (cu_A, cu_peakA) accordingly.
Note that real-time packets have highest priority in the outgoing queue, which means
they will always be put on the top of the queue. Meanwhile, all the control messages related
to connection admission and termination are transmitted as best effort traffic; however, they
have higher priority than other ordinary best effort packets, which have the lowest priority.
By doing so, we make sure that these messages do not affect the real-time traffic while
being transmitted promptly.
¹ Note that this criterion can provide QoS guarantees for VBR real-time traffic, although
it is conservative if cu_peakA/cu_A is much larger than B_U/B_M. This problem could be
alleviated if we used measured values of cu_A or cu_peakA; however, it is well known that
when the number of present real-time flows is small, the CAC must also be conservative in
order not to cause serious QoS degradation [79]. We will further investigate this issue in
our future work.
4.4.3 Rate Control
Rate control in infrastructure mode
We adopt a sliding window smoothing algorithm to estimate the aggregate instantaneous
bandwidth requirement of the real-time traffic, cu_Ar. Let us denote by t_int^i the period
between the (i − 1)-th and i-th successful packet transmission or reception at the access
point, and denote by t_real^i the time consumed by real-time traffic in this period. Apparently,
if the i-th packet is a TCP packet, t_real^i = 0. Thus we have

    cu_Ar^i = ( Σ_{j=i+1−k}^{i} t_real^j ) / ( Σ_{j=i+1−k}^{i} t_int^j ),    (4.9)

where k is the sliding window size. Thus the instantaneous available bandwidth for best
effort traffic, denoted by cu_b^i, is

    cu_b^i = B_U − cu_Ar^i.    (4.10)
If the most recent k packets are all TCP packets, then cu_Ar^i = 0 and all the bandwidth will
be allocated to TCP flows. Once a real-time packet, which has higher priority in the outgoing
queue, is transmitted or received, the rate of TCP flows will be decreased. This algorithm
thus effectively adapts the TCP rate to changes in the VBR traffic rate. Clearly, if k is small,
the estimation is aggressive in increasing the TCP rate; if k is large, the estimation is
conservative [79]. We set k to 10 in our simulation as a tradeoff.
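A minimal sketch of this sliding-window estimator follows, assuming the access point can log, for each successful transmission, the inter-transmission period t_int and the portion t_real consumed by real-time traffic; the function names are ours.

```python
from collections import deque

def make_estimator(b_u, k=10):
    """Sliding-window estimate of cu_Ar and cu_b (Eqs. 4.9-4.10); k = 10 as in the text."""
    window = deque(maxlen=k)            # (t_int, t_real) for the last k packets

    def observe(t_int, t_real):
        """Record one successful packet; t_real = 0.0 for a TCP (best effort) packet."""
        window.append((t_int, t_real))
        total_int = sum(t for t, _ in window)
        total_real = sum(r for _, r in window)
        cu_ar = total_real / total_int  # Eq. (4.9)
        return b_u - cu_ar              # Eq. (4.10): utilization left for best effort

    return observe

observe = make_estimator(b_u=0.9, k=10)
observe(0.01, 0.0)           # a TCP packet: no real-time time consumed
cu_b = observe(0.01, 0.01)   # a real-time packet occupying its whole period
# cu_ar = 0.01 / 0.02 = 0.5, so cu_b = 0.9 - 0.5 = 0.4
```

The `deque(maxlen=k)` automatically discards the oldest sample, which is exactly the sliding-window behavior Equation (4.9) describes.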
Given cu_b, the task is to fairly allocate the bandwidth to all the nodes that have best
effort traffic to transmit. We assume the number of nodes that are the sources of
downstream flows is n_d, and the number of nodes that are the sources of upstream flows
is n_u. Obviously, the access point knows both n_d and n_u. Thus the traffic rate for the
best effort traffic allocated to the access point, TR_ba, and that allocated to each mobile node,
TR_bm, are as follows:

    TR_ba = U^{-1}( cu_b × n_d / (n_u + n_d) )
    TR_bm = U^{-1}( cu_b / (n_u + n_d) ),    (4.11)
where U^{-1} is the inverse of the function U defined in Equation (4.8).

The rate allocation TR_ba immediately takes effect at the access point, and the rate
allocation TR_bm is piggybacked to each mobile node in the MAC layer ACK frame
for each best effort packet from the node. In this way, the mobile node can immediately
adjust the transmission rate of its own best effort traffic. Two bytes need to be added to the
ACK frame to indicate TR_bm in units of R_D × 2^{−16}, where R_D is the bit rate for the
MAC layer DATA packets.
Note that the above fair allocation algorithm is only one choice for rate control. Depending
on traffic patterns, other allocation algorithms can also be used, since the access
point can monitor the instantaneous rate of each best effort flow from/to each mobile node.
For instance, it is easy to design an algorithm that allocates different rates to different flows
by modifying Equation (4.11).
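The allocation in Equation (4.11), together with the two-byte encoding of TR_bm in units of R_D × 2^{−16}, can be sketched as follows. U^{-1} is passed in as a parameter because it depends on T_suc and the packet length; the linear U used in the example is purely illustrative.

```python
def allocate_rates(cu_b, n_u, n_d, u_inv):
    """Fair split of the residual utilization cu_b (Eq. 4.11); u_inv maps utilization to bit/s."""
    tr_ba = u_inv(cu_b * n_d / (n_u + n_d))   # share for all downstream flows at the AP
    tr_bm = u_inv(cu_b / (n_u + n_d))         # per-node share for upstream sources
    return tr_ba, tr_bm

def encode_tr_bm(tr_bm, r_d):
    """Quantize TR_bm into the two-byte ACK field; the unit is R_D * 2**-16."""
    return min(0xFFFF, int(tr_bm / (r_d * 2 ** -16)))

def decode_tr_bm(field, r_d):
    """Recover the rate from the two-byte field (within one quantum R_D * 2**-16)."""
    return field * r_d * 2 ** -16

# Example with an assumed linear mapping U(TR) = TR / C, so U^{-1}(cu) = cu * C,
# at a channel rate C = 2 Mb/s (the rate used in the simulations).
C = 2_000_000
tr_ba, tr_bm = allocate_rates(cu_b=0.2, n_u=5, n_d=5, u_inv=lambda cu: cu * C)
field = encode_tr_bm(tr_bm, r_d=C)
```

The quantization error is at most one unit, i.e., R_D × 2^{−16} bit/s, which is about 30.5 bit/s at 2 Mb/s.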
Rate control in ad hoc mode
We propose a novel, simple and effective rate control scheme for the best effort traffic
at each node. In this scheme, each node needs to monitor the channel busyness ratio R_b
during a period of T_rb. Let us denote by R_br the contribution from real-time traffic to R_b,
and denote by TR_b the traffic rate of best effort traffic at the node under consideration, with
the initial value of TR_b conservatively set, say to one packet per second. The node thus
adjusts TR_b after each T_rb according to the following:

    TR_b^new = TR_b^old × (R_bt − R_br) / (R_b − R_br),    (4.12)

where TR_b^new and TR_b^old are the values of TR_b after and before the adjustment, and R_bt
is a threshold on the channel busyness ratio, set to 95% × B_U. Two points are noted
on Equation (4.12). First, we see that the node increases the rate of its best effort traffic if
R_b < R_bt and decreases the rate otherwise. Second, if all the nodes adjust the rates of their
own best effort traffic according to Equation (4.12), the total best effort traffic rate will be

    Σ TR_b^new = Σ TR_b^old × (R_bt − R_br) / (R_b − R_br) ≈ U^{-1}(R_bt − R_br),    (4.13)
where Σ TR_b^old ≈ U^{-1}(R_b − R_br) follows from the fact that R_s ≈ R_b, as shown in
Equation (4.7), and R_b − R_br is the contribution of the total best effort traffic to R_b. Thus,
after one control interval T_rb, the channel utilization will approximately amount to R_bt.
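The multiplicative update in Equation (4.12) can be sketched as follows; the guard against a non-positive denominator is our own defensive addition for the sketch, not part of the original scheme.

```python
def adjust_rate(tr_b, r_b, r_br, r_bt, eps=1e-6):
    """One rate-control step (Eq. 4.12): scale the best effort rate toward R_bt.

    tr_b : current best effort rate TR_b (e.g., packets/s)
    r_b  : observed channel busyness ratio R_b
    r_br : estimated real-time contribution R_br
    r_bt : busyness threshold R_bt = 0.95 * B_U
    """
    denom = max(r_b - r_br, eps)   # defensive guard (assumption of this sketch)
    return tr_b * (r_bt - r_br) / denom

# With B_U = 0.9, R_bt = 0.855. If the channel is underused (R_b = 0.5 < R_bt),
# the rate increases; if overloaded (R_b = 0.95 > R_bt), it decreases.
up = adjust_rate(tr_b=10.0, r_b=0.5, r_br=0.2, r_bt=0.855)
down = adjust_rate(tr_b=10.0, r_b=0.95, r_br=0.2, r_bt=0.855)
```

Note that the sign of R_b − R_bt alone decides whether the factor is above or below one, which is exactly the first observation made about Equation (4.12).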
Apparently this scheme depends on the estimation of R_br. A larger estimate of R_br
results in a larger increase in traffic rate when R_bt > R_b and a larger decrease in traffic rate
when R_bt < R_b. Conversely, a smaller estimate of R_br results in a smaller increase in
traffic rate when R_bt > R_b and a smaller decrease when R_bt < R_b. To avoid
overloading the wireless LAN and to protect the QoS level of admitted real-time traffic, a
conservatively increasing and aggressively decreasing law is desired for controlling the
best effort traffic rate. This is especially preferred given that an accurate estimate
of R_br is not available. These considerations have led us to the following scheme for
estimating R_br.
Each mobile node needs to monitor all the traffic in the air. Note that, to be consistent
with the original 802.11 protocol, our scheme only requires mobile nodes to decode the
MAC header of each packet they hear, as the original 802.11 does in the NAV procedure,
instead of the whole packet. To differentiate real-time packets from best effort packets,
one reserved bit in the subtype field of the MAC header is used. Therefore, the observed
channel busyness ratio comprises three contributions: that from best effort traffic with a
decodable MAC header, R_b1; that from real-time traffic with a decodable MAC header,
R_b2; and that from all the traffic with an undecodable MAC header due to collision, R_b3.
So we give an upper bound, a lower bound, and an approximation for R_br as follows:

    R_b2 ≤ R_br ≤ R_b2 + R_b3
    R_br ≈ R_b2 × (1 + R_b3 / (R_b1 + R_b2)) = R_b2 × R_b / (R_b1 + R_b2) ≡ R̄_br,    (4.14)

where we assume R_b3 is composed of real-time traffic and best effort traffic in the ratio
R_br/R_b.
To enforce the conservatively increasing and aggressively decreasing law, we thus set
R_br as follows:

    R_br = R_b2,           if R_b < R_bt;
    R_br = R_b2 + R_b3,    if R_b ≥ R_bt.    (4.15)
We also need to determine the control interval T_rb in a distributed manner. To be responsive
to changes in the channel busyness ratio observed in the air, the rate is adjusted each
time a node successfully transmits a best effort packet. Thus T_rb is set to the
interval between two successive best effort packets that are successfully transmitted. Note
that even when such an interval is short and thus no real-time traffic is observed in it, i.e.,
R_br = 0, the rate of best effort traffic can at most be increased to U^{-1}(R_bt). At that point,
the collision probability is still very small according to the previous analysis, so later
real-time packets can be quickly transmitted, which will in turn lower the best effort traffic
rate.
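The bound selection in Equation (4.15) amounts to a small decision rule, sketched below; the parameter names mirror the text and the numeric values are illustrative.

```python
def estimate_r_br(r_b1, r_b2, r_b3, r_bt):
    """Pick the R_br bound per Eq. (4.15): conservative increase, aggressive decrease.

    r_b1 : busyness from best effort traffic with decodable MAC headers
    r_b2 : busyness from real-time traffic with decodable MAC headers
    r_b3 : busyness from undecodable (collided) traffic
    """
    r_b = r_b1 + r_b2 + r_b3
    if r_b < r_bt:
        return r_b2            # lower bound: understate R_br, so the increase is small
    return r_b2 + r_b3         # upper bound: overstate R_br, so the decrease is large

# Underloaded channel (R_b = 0.55 < 0.855): lower bound is chosen.
low = estimate_r_br(r_b1=0.3, r_b2=0.2, r_b3=0.05, r_bt=0.855)
# Overloaded channel (R_b = 0.9 >= 0.855): upper bound is chosen.
high = estimate_r_br(r_b1=0.5, r_b2=0.3, r_b3=0.1, r_bt=0.855)
```

Feeding the returned estimate into the update of Equation (4.12) yields the desired asymmetry: understating R_br dampens increases, while overstating it amplifies decreases.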
4.5 Performance Evaluation of CARC
We have implemented the CARC scheme in the ns-2 simulator [106]. In this section, we
evaluate its effectiveness in an 802.11 wireless LAN.
4.5.1 Simulation Configuration
An 802.11 based wireless LAN with 100 mobile nodes is simulated. In all simulations,
the channel rate is 2 Mb/s and the simulation time is 120 seconds. The queue length at each
node is 100 packets. The IEEE 802.11 system parameters are summarized in Table 4–1.
To model multimedia traffic, three different classes of traffic are considered:
Voice Traffic (VBR): The voice traffic is modeled as VBR using an on/off source with
exponentially distributed on and off periods, each with an average of 300 ms. Traffic is
generated during the on periods at a rate of 32 kb/s with a packet size of 160 bytes; thus
the inter-packet time is 40 ms.

Video Traffic (CBR): The video traffic is modeled as CBR traffic with a rate of 64 kb/s
and a packet size of 1000 bytes; thus the inter-packet time is 125 ms.
Data Traffic Model (UBR): We use greedy best-effort TCP traffic as the background
data traffic, with a packet size of 1000 bytes.
During the simulation, the RTS/CTS mechanism is used for video and TCP packets, but
not for voice packets because of its relatively large overhead. The traffic load is gradually
increased, i.e., a new voice, video or TCP flow is periodically added in an interleaved
way, to observe how CARC works and the effect of a newly admitted flow on the
performance of previously admitted flows. Specifically, until the 95th second, a new voice
flow is added at the time instant of 6 × i seconds (0 ≤ i ≤ 15). Likewise, a video flow is
added two seconds later and a TCP flow four seconds later. Furthermore, to simulate a
real scenario where the starts of real-time flows are randomly spread over time, the start of
a voice flow is delayed by a random period uniformly distributed in [0 ms, 40 ms], and that of
a video flow by a random period uniformly distributed in [0 ms, 125 ms]. Note that in
the simulation period [95 s, 120 s], we purposely stop injecting more flows into
the network in order to observe how well CARC performs in a steady state.
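The staggered flow-arrival schedule above can be expressed compactly; the function below is our own sketch of the workload generator (the seed and structure are illustrative, not taken from the ns-2 scripts).

```python
import random

def flow_schedule(seed=1):
    """Flow start times: a voice, video and TCP flow added every 6 s in an
    interleaved way until the 95th second, with real-time starts randomly
    spread by the uniform delays given in the text."""
    rng = random.Random(seed)
    events = []
    for i in range(16):                                   # 0 <= i <= 15
        events.append((6 * i + rng.uniform(0.0, 0.040), "voice"))
        events.append((6 * i + 2 + rng.uniform(0.0, 0.125), "video"))
        events.append((6 * i + 4, "tcp"))
    return sorted(events)

sched = flow_schedule()
# 16 attempted flows of each type, all starting before the 95th second;
# the CAC decides which of the real-time attempts are actually admitted.
```

Note that the schedule only describes arrival attempts: as the results below show, the CAC rejects the voice and video attempts made after enough flows have been admitted.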
Two scenarios shown below are investigated.
Infrastructure Mode: In this case, all flows pass through the access point; half of the
flows are downstream, and the other half are upstream. The sources or destinations of
these flows, other than the access point itself, are randomly chosen from all the mobile
nodes.
Ad Hoc Mode: In this case, there is no fixed access point. Therefore, the sources and
the destinations of all flows are randomly chosen from all the mobile nodes. All the other
parameters are the same as those in the infrastructure mode.
4.5.2 Simulation Results
From the simulation results, we find that a total of 12 voice flows and 11 video flows
are admitted by the 66th second; no more voice or video flows are admitted thereafter. The
number of TCP flows increases by one every 6 s until the 95th second. After 95 s, as expected,
there is no change in the number of flows.
As can be calculated using Equation (4.8), each voice flow contributes 0.0347 to the
channel busyness ratio R_b, and each video flow 0.04339, noting that each packet carries a
20-byte IP header in ns-2. Thus, after 12 voice and 11 video flows are admitted,
the portion of R_b that accounts for the voice flows is 0 ∼ 0.38, with a mean of 0.19, and
the portion that accounts for the video connections is 0.52. Thus U(TR_A) = 0.71 and
U(TR_Apeak) = 0.90. Thereafter, the admission control mechanism starts to reject further
real-time flows.
Infrastructure mode
Fig. 4–3(a) shows the throughput for the three traffic types throughout the simulation.
At the beginning, the TCP traffic has high throughput; then, as more real-time flows are
admitted, it gradually drops as a result of rate control. Because we set an upper bound
B_M for real-time traffic, it can be observed that when the traffic load becomes heavy, TCP
traffic, as desired, is not completely starved. Because TCP traffic is allowed to use any
available channel capacity left by real-time traffic, the total channel throughput, namely
the sum of the throughput of the different types of traffic, always remains steadily high.
Note that the throughput for the TCP traffic does not include the contribution from TCP
ACK packets, even though they also consume channel bandwidth. Thus, the
total channel throughput is somewhat higher than the total throughput shown in
Fig. 4–3(a).
The end-to-end delay is illustrated in Fig. 4–3(b), in which every point is averaged
over 2 seconds. It can be observed that the delay for real-time traffic is always kept below
20 ms. Initially, as the number of admitted real-time flows increases, the delay increases.
Note that the increase in delay is not due to TCP traffic, but to the increasing number
of competing real-time flows. Then, the delay oscillates around a stable value. Fig. 4–3(c)
presents the delay distribution for voice and video traffic. More detailed statistics of
delay and delay variation are given in Table 4–2 and Fig. 4–4. As shown in Table 4–2,
the 97th percentile delay for voice and video is 35.5 ms and 32.2 ms respectively, and
Figure 4–3: Infrastructure mode: the number of real-time and TCP flows increases over
time. Channel rate is 2 Mbps. (a) Aggregate throughput. (b) Average end-to-end delay of
voice and video traffic. (c) End-to-end delay distribution of voice and video traffic.
the 99th percentile delay for voice and video is 55.4 ms and 45.2 ms respectively. It
is known that, for real-time traffic, packets that fail to arrive in time are simply discarded.
Given the allowable 1% ∼ 3% packet loss rate, these delays are well within the bounds
given in Section 4.2.2. The good delay performance indicates that the CARC scheme can
effectively guarantee the delay and delay jitter requirements of real-time traffic, even in the
presence of highly dynamic TCP traffic.

Finally, we note that in the simulations, no lost real-time packet is observed. This should
be credited to the fact that our CARC scheme successfully maintains a very low collision
Figure 4–4:End-to-end delay of all voice and video packets in infrastructure mode
Table 4–2: The mean, standard deviation (SD), and 97th, 99th, 99.9th percentile delays
(in seconds) for voice and video in the infrastructure mode.

            mean     SD       97 %ile   99 %ile   99.9 %ile
VBR Voice   0.0097   0.0089   0.0306    0.0412    0.0670
CBR Video   0.0127   0.0081   0.0314    0.0392    0.0609
probability, thereby avoiding packet losses due to collisions. Also, since the network is
tuned to work at the optimal point, no real-time packet is lost due to buffer overflow.
Ad hoc mode
Fig. 4–5 illustrates the performance of the CARC scheme in the ad hoc mode. Again,
the performance is very good. The CARC scheme delivers almost the same
throughput and average end-to-end delay, and again no lost real-time packet is observed.
However, as seen from Fig. 4–5(c), the delay variation is slightly larger, which is also
confirmed in Table 4–3 and Fig. 4–6. This is due to the imperfect estimation of the rate of
real-time traffic in the ad hoc mode, as each node estimates the rate locally.
Fig. 4–7 demonstrates that the rate control scheme achieves a stable and high channel
utilization, around 90%, even as the numbers of voice, video and TCP flows and of active
nodes vary and the packet sizes of the different types of traffic differ. The channel utilization
is calculated by summing up the contributions of the voice, video, TCP DATA and TCP
ACK packets to the channel utilization, according to the end-to-end data rates shown in
Fig. 4–5(a) and Equation (4.8).
Figure 4–5: Ad hoc mode: the number of real-time and TCP flows increases over time.
Channel rate is 2 Mbps. (a) Aggregate throughput. (b) Average end-to-end delay of voice
and video traffic. (c) End-to-end delay distribution of voice and video traffic.
Thus, our rate control scheme for the ad hoc mode provides another kind of distributed
solution to maximizing the network throughput, besides the methods in the papers
[12, 13, 16, 20, 85, 90]. However, unlike these previous approaches, ours does not change
the medium access mechanism of the DCF protocol and has stable performance under
different numbers of active nodes and different packet sizes in the presence of CBR, VBR
and TCP best effort traffic.
Figure 4–6:End-to-end delay of all voice and video packets in ad hoc mode
Table 4–3: The mean, standard deviation (SD), and 97th, 99th, 99.9th percentile delays
(in seconds) for voice and video in the ad hoc mode.

            mean     SD       97 %ile   99 %ile   99.9 %ile
VBR Voice   0.0101   0.0104   0.0350    0.0500    0.0876
CBR Video   0.0133   0.0092   0.0337    0.0477    0.0903
In conclusion, the simulation results demonstrate that our CARC scheme performs well
when the network operates either in the infrastructure mode or in the ad hoc mode.
Consequently, the strict QoS of real-time traffic is statistically guaranteed and the maximum
channel utilization is closely approached.
4.6 Discussions
So far we have assumed that the channel is perfect, i.e., that no packet is lost due to channel fading.
In this section, we comment on the impact that channel fading may have on the performance
of CARC. Also, we discuss the implications that arise when prioritized DCF rather than
pure DCF is employed.
4.6.1 Impact of Fading Channel
When channel fading is factored in, packet losses are no longer due only to collisions;
they may well be caused by channel fading. If the input traffic remains the same as in
the case of no channel fading, the retransmissions of packets lost to channel fading, at
rate λ_retx, actually increase the input traffic rate over the channel, which becomes
λ + λ_retx. By keeping the channel busyness ratio below the maximum channel utilization,
the rate control scheme can automatically decrease the traffic rate λ from the higher
layer. And the call admission control scheme can also take λ_retx into account when issuing
admissions. Thus the whole CARC scheme can effectively suppress the adverse effects
caused by channel fading and still deliver comparable QoS performance.

Figure 4–7: Channel utilization in ad hoc mode
It is important to note that channel fading is normally not a serious problem in a
WLAN, which features low node mobility and a relatively stable channel. However, if the
packet error probability due to channel fading becomes significant, the QoS level will
suffer. Even then, our proposed CARC, by accounting for λ_retx, can still effectively control
the total input traffic rate and hence maintain a very small collision probability, guaranteeing
that the 802.11 MAC provides the best QoS level it can support in this case. Of course, if
channel fading is severe enough, even this best QoS level may not satisfy the QoS
requirements of real-time traffic.
4.6.2 Impact of Prioritized MAC
Without changing the original medium access mechanism of the 802.11 DCF, the best
approach to guaranteeing QoS for real-time traffic is to take advantage of traffic regulation,
such as admission control over real-time traffic and rate control over best effort traffic, so
that the network operates at the optimal point. Clearly, within either real-time traffic
or best effort traffic, no differentiation is made. As a result, all the real-time traffic,
including CBR and VBR traffic, equally shares the delay and delay variation, which
is sometimes not flexible enough.
If a prioritized 802.11 MAC protocol similar to the schemes in [1, 125] is adopted, we
are able to provide priority within real-time traffic. As a result, high priority real-time
traffic experiences smaller delay variation, whereas low priority real-time traffic experiences
larger delay variation [33]. Of course, to fully exploit the potential of a prioritized MAC
and meet different QoS requirements, the admission control and rate control algorithms
proposed here should control the aggregate rate of each class of traffic so that the collision
level within each class is small enough to guarantee that its QoS requirement is not violated.
4.7 Conclusion
As a continuation of our previous work [150], in this chapter we have proposed a sim-
ple and effective call admission control and rate control scheme (CARC) to support QoS
of real-time and streaming traffic in the 802.11 wireless LAN. Based on the novel use of
the channel busyness ratio, which is shown to be able to characterize the network status,
the scheme enables the network to work at the optimal point. Consequently, it statisti-
cally guarantees stringent QoS requirements of real-time services, while approaching the
maximum channel utilization.
Furthermore, the rate control scheme for the ad hoc mode has its own virtue. It provides
another kind of distributed solution, i.e., rate control over the packets in the outgoing queue
without modification to the medium access mechanism of the IEEE 802.11 DCF protocol,
to maximize the network throughput, and it has stable performance under different numbers
of active nodes and different packet sizes in the presence of CBR, VBR and TCP
traffic.
Combining the analytical results in our previous work [150] and our proposed CARC
scheme, we therefore make it clear that the IEEE 802.11 WLAN can provide statistical
QoS guarantees, not just differentiated service, for multimedia services.
CHAPTER 5
DISTRIBUTED FAIR AND EFFICIENT RESOURCE ALLOCATION WITH QOS
SUPPORT OVER IEEE 802.11 WLANS
Recent years have seen a rapidly growing number of users of wireless local area networks.
To better meet user needs, various types of applications including voice over IP (VoIP),
streaming multimedia, and data services are expected to be supported. However, the IEEE
802.11 distributed coordination function (DCF) standard has long been known to be inefficient
and unfair in the presence of many concurrent users. Furthermore, despite the availability
of many service differentiated schemes, quality of service (QoS) for real-time services is
still not well supported.
To address the efficiency, fairness and QoS, we propose a distributed resource alloca-
tion (DRA) framework on top of 802.11. DRA relies on the novel use of channel busyness
ratio (BR) as the network status indicator, which can be easily obtained in 802.11. Based
on BR, a novel three-phase control mechanism is proposed to fairly and efficiently utilize
network resources and guarantee a short medium access delay. DRA also integrates the
three-phase control mechanism with a call admission control scheme and a packet
concatenation scheme into a single unified framework to better support QoS and multiple
channel rates, in addition to efficiency and fairness. Extensive simulations demonstrate that DRA
achieves near optimum throughput and the performance is stable regardless of the number
of nodes and packet sizes. Compared to 802.11, it improves throughput by as high as 71%
with RTS/CTS and 157% without RTS/CTS. Fairness is well achieved. Moreover, DRA
can provide statistical QoS guarantee for real-time services.
5.1 Introduction
In recent years, IEEE 802.11 [68] based wireless local area networks (WLANs) have
been widely deployed due to their low cost and easy accessibility. Distributed coordination
function (DCF), the fundamental channel access method in the 802.11 MAC, is based on
carrier sense multiple access with collision avoidance (CSMA/CA). Briefly speaking, it
uses a binary exponential backoff (BEB) scheme to reduce the collision probability and the
RTS/CTS exchange before data transmission to shorten the collision periods of long data
packets. While it works well in supporting traditional best-effort traffic, it is inadequate in
dealing with several critical issues that arise as a result of the ever-increasing number of
users and their diverse service needs.
First, although the data rate of WLANs has increased dramatically, it is still believed
to lag behind the increasing bandwidth demands. Consequently, the networks easily enter
saturation. In addition, due to the popular use of WLAN technology, it is not uncommon
to see a large number of concurrent users accessing the WLAN through the same access
point (AP), such as in conference halls or classrooms. However, the inherent deficiency
of DCF in supporting many users concurrently under heavy traffic load always results in
severe packet collisions and hence greatly degrades network throughput ([21, 85, 160, 15]).
Therefore, efficiently coordinating simultaneous channel access by many users is impor-
tant.
Second, network usage that used to be dominated by web traffic has shifted dramati-
cally with significant increases in VoIP, streaming multimedia traffic and peer-to-peer traffic
[62]. Coming together with this shift is the demand for QoS support in WLANs. However,
supporting QoS in wireless channels is difficult given a number of challenges [125]. Be-
sides the wireless channel errors, the network throughput varies with the level of channel
collision; so does the packet delay.
Third, given the conflict between the large number of users and the relatively “limited”
channel capacity and the diverse QoS requirements imposed by a range of applications,
fairly allocating channel resource to all users is highly desirable. While many good fair
queueing schemes have been proposed for the wired networks and cellular networks [166,
97], they may not be directly applicable to WLANs if the networks operate in ad hoc
mode. In this case, due to the lack of a centralized controller, no node in the network
has a global view of the network status, such as the number of contending nodes or the
traffic situation at each independent node. Clearly, this dictates a distributed fair control
mechanism. Furthermore, a distributed mechanism has several benefits compared to a
centralized one, such as the avoidance of single point-of-failure and scalability.
In the literature, enormous effort has been devoted to dealing with each of the above
issues. Although significant progress has been made, none of them is completely addressed
(more details are given in Section 5.6). While the philosophy of breaking up a big problem
into several smaller ones and addressing each individually is common and effective, a joint
and systematic study of these issues may lead to a good unified solution for the following
reasons.
• The issues mentioned above are actually correlated. For instance, when packet col-
lision becomes severe, many packets will either experience increased delay or be
dropped due to consecutive failed attempts. As a result, not only is the throughput
degraded, but the packet delay and packet loss rate are also affected so much that
the QoS requirements are violated. Fairness may also be hurt if applications respond
to packet losses in different ways. In this sense, a solution that is designed with all
those issues taken into account is justified.
• A unified solution would be conceptually simple and more efficient in practice.
While each individual solution might work well in solving one problem, one can-
not guarantee that they will work equally well when being put together. In fact, it is
not unusual that one solution counteracts or weakens another’s effectiveness or effi-
ciency. A typical example is the tradeoff between throughput and fairness; normally
one is improved at the expense of the other. On the other hand, by studying those
correlated issues together, one might have a better understanding of the root causes
for them and thus possibly obtain a solution that performs better in addressing each
individual issue.
Motivated by those potential benefits, we endeavor to seek a unified solution to the
three issues. To the best of our knowledge, there has been no such study thus far. More-
over, considering the limited processing power/battery power at each mobile node and the
distributed nature of DCF, it is desirable that any solution aimed at enhancing DCF is
simple and fully distributed. In fact, simplicity and distributed control are the very charac-
teristics that contribute to the unprecedented success of WLANs. Also, the solution with
as few as possible modifications to 802.11 is preferred for the purpose of compatibility.
In this chapter, we conduct a comprehensive study of the issues of efficiency, fairness
and QoS and propose a novel distributed resource allocation (DRA) framework for 802.11
WLANs, which is fair, efficient, and QoS-capable.
The contribution of this chapter is twofold. First, using the channel busyness ra-
tio to characterize the network status, we develop a novel three-phase control mechanism
to dynamically allocate network resource in terms of channel time. By well controlling
channel collisions, this mechanism achieves near optimum throughput, fair resource allo-
cation, and small MAC delay. Theoretical analysis shows that the multiplicative-increase
phase quickly leads to the convergence to high efficiency, and the additive-increase and
multiplicative-decrease phases quickly lead to the convergence to fairness. Second, to bet-
ter support QoS and multiple channel rates, we further propose a unified framework, DRA,
which incorporates the three-phase control mechanism and several other mechanisms. By
conducting call admission control over real-time services and properly adjusting the net-
work resource among real-time and non-real-time services, DRA can support statistical
QoS for real-time services with short delay and zero packet loss rate. In DRA, we also de-
velop a packet concatenation scheme to reduce the relatively high control overhead when
multiple channel rates coexist. Extensive simulations verify the performance of DRA. In
summary, DRA has the following desirable features that distinguish it from previous
schemes:
• It is fully distributed without the involvement of a centralized controller and thus is
well suited to the distributed nature of 802.11.
• It is shown by both analysis and simulation that DRA performs well in achieving
high throughput, time fairness and QoS support.
• It only requires simple calculation of the channel busyness ratio from the MAC layer.
Other than that, no modification is made to 802.11, which is desirable for real-world
deployment.
The rest of this chapter is organized as follows. The design rationale for DRA is given
in Section 5.2. We describe DRA in detail in Section 5.3. The convergence analysis is
presented in Section 5.4. The performance evaluation is conducted in Section 5.5. Section
5.6 discusses the related work. Finally, Section 5.7 concludes this chapter.
5.2 Design Rationale
5.2.1 Efficiency and QoS
It is well known that a contention-based 802.11 WLAN can suffer low throughput,
long packet delay, and a high packet loss rate under heavy traffic load. Therefore, our initial
objective is to reexamine the potential of an 802.11-based WLAN to achieve high
throughput and support QoS in the network.
For DCF, the network throughput S can be expressed as the average payload size
transmitted in a time slot divided by the average length of a time slot. Then, following the
techniques of Bianchi's paper [15], we can derive the throughput:

S = Ptr·Ps·E[D] / [(1 − Ptr)σ + Ptr·Ps·Ts + Ptr(1 − Ps)Tc]
Ptr = 1 − (1 − τ)^n
Ps = nτ(1 − τ)^(n−1) / Ptr
p = 1 − (1 − τ)^(n−1)        (5.1)

In Equation (5.1), E[D] is the expected payload size, Ts is the average successful transmission
time, Tc is the average collision time, σ is a MAC layer idle slot time, τ is the
transmission probability of each node in any slot, n is the total number of nodes in the
WLAN, and p is the probability that a node encounters a collision whenever it transmits.
Focusing on the saturated case where each node always has packets in its queue awaiting
transmission, Bianchi derived the formula for τ:

τ = 2(1 − 2p) / [(1 − 2p)(CWmin + 1) + p·CWmin(1 − (2p)^m)]
m = log2(CWmax / CWmin)        (5.2)

where CWmin and CWmax are the initial and maximum contention windows. From Equations
(5.1) and (5.2), τ, p and S can be solved.
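As an illustration, the coupled Equations (5.1) and (5.2) can be solved numerically. The sketch below finds the saturated operating point by bisection on p; the contention-window values CWmin = 32 and CWmax = 1024, the function names, and the timing units are illustrative assumptions, not values fixed by the chapter.

```python
def tau_of_p(p, cw_min=32, cw_max=1024):
    """Saturated transmission probability tau of Equation (5.2)."""
    m, w = 0, cw_min
    while w < cw_max:              # m = log2(cw_max / cw_min)
        w, m = w * 2, m + 1
    return 2 * (1 - 2 * p) / ((1 - 2 * p) * (cw_min + 1)
                              + p * cw_min * (1 - (2 * p) ** m))

def p_of_tau(tau, n):
    """Conditional collision probability p of Equation (5.1)."""
    return 1 - (1 - tau) ** (n - 1)

def solve_saturated(n):
    """Fixed point of Equations (5.1)-(5.2) by bisection on p.

    p_of_tau(tau_of_p(p), n) decreases in p while the identity increases,
    so the crossing is unique and bisection converges.
    """
    lo, hi = 1e-9, 0.999
    for _ in range(200):
        p = (lo + hi) / 2
        if p_of_tau(tau_of_p(p), n) > p:
            lo = p
        else:
            hi = p
    return (lo + hi) / 2

def throughput(p, n, e_d, t_s, t_c, sigma):
    """Throughput S of Equation (5.1); e_d, t_s, t_c, sigma in caller's units."""
    tau = tau_of_p(p)
    p_tr = 1 - (1 - tau) ** n
    p_s = n * tau * (1 - tau) ** (n - 1) / p_tr
    return p_tr * p_s * e_d / ((1 - p_tr) * sigma
                               + p_tr * p_s * t_s + p_tr * (1 - p_s) * t_c)
```

Since the chapter's expressions for Ts and Tc follow Bianchi and depend on frame format details, they are left as inputs here.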
However, the maximum throughput is not necessarily achieved in the saturated case.
Denote by p′ and S′ the values of p and S for the saturated case; the actual collision
probability could be less than p′ if not all the nodes in the network are contending for the
channel at the same time, potentially leading to a throughput higher than S′. Note that the
expression of S in Equation (5.1) is general enough to apply to the non-saturated
case as well. To obtain the maximum value of S, denoted by S*, and the corresponding
value of p, denoted by p*, we can rewrite S as a function of p and let

dS/dp = 0        (5.3)

Further, we denote by pr the root of Equation (5.3). Since, given n, p is upper bounded by
p′, we obtain p* as

p* = min(pr, p′)        (5.4)
Both the maximum and saturated throughput are plotted in Fig. 5–1 using the same formulas
for Ts and Tc as used by Bianchi [15].
It is clear that the maximum throughput cannot be achieved in the saturated case, especially
when n is not small. It is important to note that the maximum throughput is not
sensitive to the number of nodes. Equation (5.4) implies that if we can somehow tune p
Figure 5–1: Maximum and saturated throughput with different numbers of nodes (RTS/CTS is used, packet length = 1000 bytes, channel rate = 11 Mbps)
to approach p*, the network can attain the maximum throughput. However, since p is not
easily obtainable and controllable, we are forced to seek a good alternative. Let br denote
the channel busyness ratio, i.e., the ratio of the time when the channel is busy, which can
be expressed as

br = [Ptr·Ps·Ts + Ptr(1 − Ps)Tc] / [(1 − Ptr)σ + Ptr·Ps·Ts + Ptr(1 − Ps)Tc]        (5.5)

It can be trivially shown that br is an injective function of p; moreover, it can be easily
measured since 802.11 is based on carrier sensing. Denoting by br* the value
of br when p = p*, we can thus tune the network to work at br*.
Recently, we [150] have theoretically shown that br* is relatively stable and around
0.90 ∼ 0.98, and that if br ≤ br*, the delay is good enough to support statistical QoS. It is thus
reasonable to make the network deliver high throughput and support QoS by allowing br
to approach br* and ensuring br ≤ br*.
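To see why br can serve as a proxy for p, one can evaluate Equation (5.5) over a range of collision probabilities and observe that it is monotone (hence injective). The sketch below reuses the Bianchi relations of Equation (5.1); the timing values and function name are illustrative assumptions.

```python
def busyness_ratio(p, n, t_s, t_c, sigma):
    """Channel busyness ratio br of Equation (5.5), driven by p."""
    tau = 1 - (1 - p) ** (1 / (n - 1))   # invert p = 1 - (1 - tau)^(n-1)
    p_tr = 1 - (1 - tau) ** n
    p_s = n * tau * (1 - tau) ** (n - 1) / p_tr
    busy = p_tr * p_s * t_s + p_tr * (1 - p_s) * t_c
    return busy / ((1 - p_tr) * sigma + busy)

# br grows monotonically with p, so each br value maps back to a unique p
ratios = [busyness_ratio(p, 50, 1.0, 1.0, 0.02) for p in (0.1, 0.3, 0.5)]
```

With Ts = Tc = 1 ms and σ = 20 µs as assumed inputs, the computed ratios increase strictly with p, which is the injectivity the text relies on.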
5.2.2 Fairness
We now describe the fairness criterion for DRA. 802.11 DCF is designed to achieve
long-term fairness in terms of the transmission opportunity, i.e., every node in the network
will have equal opportunities to gain channel access and transmit packets over an infinitely
long period. If each node uses the same packet size, this fair channel access will
translate into throughput fairness. This approach, however, has several problems in reality.
First, throughput unfairness might arise if nodes use different packet sizes, which seems inevitable
as the supported services in WLANs become diversified. Moreover, the aggregate
throughput will also be hurt if some nodes use a very small packet length and hence intro-
duce large control overhead. Second, since the backoff process in DCF always favors those
nodes that just successfully transmitted a packet, severe throughput unfairness might hap-
pen in short periods. Third, as 802.11 supports multiple channel bit rates, if different nodes
use different rates, this approach will penalize nodes with high rates and significantly lower
the aggregate throughput, as revealed by Heusse et al. [63]. Therefore, we aim to achieve
time fairness [121]. Specifically, DRA is designed to efficiently and fairly allocate the
channel time among nodes. This time fairness model can avoid the above problems with
the throughput fairness model. In addition, it can strike a good balance between efficiency
and fairness due to varying channel qualities. With equally allocated channel time, nodes
with better channel quality can get higher throughput with higher physical channel bit rate.
5.3 Distributed Resource Allocation (DRA)
We present DRA in detail in this section. In essence, DRA is responsible for deciding
when and how packets are passed from the transmit queue to the MAC layer¹. Therefore,
we can consider DRA to be a control entity lying on top of the MAC layer.
First, we describe the basic framework. In this framework, it is assumed that there
exists a call admission controller ensuring that the admitted real-time traffic is less than the
network capacity in order to support QoS. Therefore, in the following description, we first
focus on how to adjust the sending rate of non-real-time traffic to achieve high efficiency,
good fairness and short MAC delay.
Then we explain how DRA supports QoS and multiple channel rates by incorporat-
ing necessary components such as call admission control, priority queue, and the channel
adaptive packet concatenation mechanism.
1 Without loss of generality, we decouple the transmit queue from the MAC layer.
5.3.1 Basic Framework
Resource Definition and Traffic Sending Rate
Since we aim for fair allocation of channel time, the resource allocated to each node,
denoted by r, is the allowable channel time occupation ratio for this node, that is, the
portion of channel time it may use in a unit period. Apparently, 0 ≤ r ≤ 1. According to r, DRA
schedules the time when it passes each packet to the MAC layer.
We define the channel time for packet p, denoted by tp, as the time that a successful
transmission of packet p will last over the channel. According to the 802.11 standard, we
thus have

tp = rts + sifs + cts + sifs + data + sifs + ack + difs        (5.6)

for the case where the RTS/CTS mechanism is used, and

tp = data + sifs + ack + difs        (5.7)

for the case where there is no RTS/CTS mechanism. In Equations (5.6) and (5.7), rts, cts,
data, and ack are the corresponding transmission times for the MAC frames RTS, CTS, DATA
and ACK, respectively; sifs and difs are the mandatory inter-frame spaces between these
frames. Note that difs is included in tp because each node is required to observe the
channel idle for at least difs before backing off or starting new transmissions.
After calculating tp, DRA can obtain the scheduled interval ∆, the time between two
consecutive packets that DRA passes to the MAC layer:

∆ = tp / r        (5.8)

In this way, the traffic rate of the non-real-time traffic at each node is determined. Note that
both the packet length and the channel bit rate are factored in.
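The computation of tp and ∆ is mechanical. The sketch below uses illustrative 802.11b-style frame durations in microseconds; the constants are assumptions for demonstration, not the chapter's exact values.

```python
SIFS, DIFS = 10, 50             # inter-frame spaces (us), assumed values
RTS, CTS, ACK = 160, 112, 112   # control-frame airtimes (us), assumed values

def channel_time(data_us, use_rts):
    """Channel time tp of one successful exchange, Equations (5.6)-(5.7)."""
    if use_rts:
        return RTS + SIFS + CTS + SIFS + data_us + SIFS + ACK + DIFS
    return data_us + SIFS + ACK + DIFS

def scheduled_interval(tp, r):
    """Interval between packets handed to the MAC, Equation (5.8)."""
    assert 0 < r <= 1
    return tp / r
```

For example, a node allocated r = 0.1 that sends 727 µs data frames without RTS/CTS would hand one packet to the MAC every channel_time(727, False) / 0.1 = 8990 µs, i.e., roughly every 9 ms.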
Initialization Procedure
When a node first joins the network, DRA sets its traffic rate as rstart and schedules a
random delay ∆r before passing a packet to the MAC layer. Specifically,

∆r = u × tp / rstart        (5.9)

where u is a random number uniformly distributed in [0, 1]. This random delay is used
to avoid the following undesirable situation: when many nodes join the network at the
same time, they may all observe an idle channel and start contending for the channel
simultaneously, thereby causing severe collisions.
Instantaneous Rate Update Procedure
DRA calculates the instantaneous allowable resource rint when the MAC finishes a
packet transmission and calls back the data link control sublayer, as follows:

rint = tp / (tnow − tlast MAC callback time)        (5.10)

where tnow is the current time, tlast MAC callback time is the MAC callback time for the last
packet transmission, and tp is the channel occupation time returned by the MAC callback
function. Note that tp can also be calculated from the current channel bit rate according
to Equations (5.6) and (5.7).
DRA regards rint as the current allowable resource and calculates a new allowable
resource, denoted by rnew, with the following three-phase resource allocation mechanism.
Three-Phase Resource Allocation Mechanism
This mechanism includes three control phases:
• When the channel is underloaded, i.e., br < BM, DRA enters the multiplicative-increase
phase.
• When the channel is moderately loaded, i.e., BM ≤ br < brth, DRA enters the
additive-increase phase.
• When the channel is heavily loaded, i.e., br ≥ brth, DRA enters the multiplicative-decrease
phase.
In the above phases, br is the current channel busyness ratio contributed by all the
nodes in the network; brth is a constant channel busyness ratio threshold very close to br*;
BM is a threshold that determines where the channel status changes from underloaded to
moderately loaded.
Multiplicative-increase phase: This phase aims to avoid wasting the channel resource
when the traffic load is low. rnew is adjusted as:

rnew = rint × brth / br        (5.11)

Notice that the summation of rint over all nodes is equal to br. If each node increases its
allowable resource by Equation (5.11), the channel busyness ratio will quickly converge to
brth after a period during which each node transmits one more packet on average. If some
nodes no longer increase their traffic rates due to the constraints of their applications, other
greedy nodes² can still quickly increase their traffic rate by the multiplicative increase, so
that br quickly converges to brth after one or several more packet transmissions.
Whenever br ≥ BM, DRA adopts an additive-increase and multiplicative-decrease
(AIMD) algorithm to converge to both high efficiency and fairness.
Additive-increase phase: In this phase, rnew is adjusted as:

rnew = rint + δ·tp / rint        (5.12)

where δ is the increase parameter.
Multiplicative-decrease phase: In this phase, rnew is adjusted as:

rnew = γ × rint × brth / br        (5.13)
2 By greedy nodes, we mean they have enough traffic to saturate the network.
where γ is a decrease parameter, and 0 < γ ≤ 1.
The values of δ and γ impact the convergence speed of both the efficiency and the fairness.
We will discuss how to set these parameters in Section 5.4.
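The three update rules can be collected into a single function. The sketch below assumes the parameter values used later in the chapter (brth = 0.93 for the no-RTS/CTS case, γ = 0.95, BM = γ·brth, δ = 0.05), with tp in seconds; the function name is an assumption.

```python
BRTH = 0.93           # busyness-ratio threshold (no-RTS/CTS setting)
GAMMA = 0.95          # multiplicative-decrease parameter, 0 < GAMMA <= 1
BM = GAMMA * BRTH     # underloaded / moderately-loaded boundary
DELTA = 0.05          # additive-increase parameter

def update_resource(r_int, tp, br):
    """Three-phase resource update, Equations (5.11)-(5.13)."""
    if br < BM:                          # underloaded: multiplicative increase
        return r_int * BRTH / br
    if br < BRTH:                        # moderate: additive increase
        return r_int + DELTA * tp / r_int
    return GAMMA * r_int * BRTH / br     # heavy: multiplicative decrease
```

Note how the update depends only on the node's own rint, tp, and the locally measured br, which is what keeps the mechanism fully distributed.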
Backoff Procedure
At the scheduled packet sending time, the channel may still be very busy, i.e., several
or more nodes may already be contending for the channel concurrently at the MAC layer.
In this case, DRA further uses a backoff procedure to avoid severe collisions at the MAC
layer. Specifically, when the scheduled sending time expires, DRA checks the br observed
over the period starting from the time instant when the last packet was sent to the
MAC layer, denoted by tlast sending time. If it is larger than brth, DRA will schedule an
additional delay ∆d before passing a packet to the MAC layer:

∆d = [(br − brth) / brth] × (tnow − tlast sending time)        (5.14)

where tnow is the current time. Otherwise, DRA immediately sends the packet to the
MAC layer.
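The backoff delay of Equation (5.14) simply scales the elapsed interval by the relative overload. A minimal sketch, with time in seconds and the function name assumed:

```python
def backoff_delay(br, brth, t_now, t_last_sending):
    """Additional delay of Equation (5.14); zero when br <= brth."""
    if br <= brth:
        return 0.0
    return (br - brth) / brth * (t_now - t_last_sending)
```

For instance, if br = 0.96 against brth = 0.93 over a 1 s interval, the extra delay is 0.03/0.93 of that interval, about 32 ms.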
Acquisition of Channel Busyness Ratio
There are two time instants when the channel busyness ratio needs to be calculated
in DRA, i.e., when the MAC layer finishes a packet transmission and calls back the link
control layer or when the scheduled sending time expires.
There is already a function in the 802.11 MAC to determine whether the channel is
busy or not. The channel is considered busy whenever the node under consideration is
transmitting or receiving, or physical carrier sensing or network allocation vector (NAV)
indicates a busy channel. The channel busyness ratio can be calculated by adding up all
the busy periods and then dividing the sum by the observation period. DRA can determine
the start and end points of the observation period. It thus can be seen that the acquisition
of channel busyness ratio only requires several simple calculations at the MAC layer.
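One way to realize this bookkeeping is an accumulator that the MAC notifies on busy/idle transitions. The event interface below is an assumption about how the measurement could be wired up, not part of 802.11 itself.

```python
class BusynessMeter:
    """Accumulates busy time between DRA observation points (Section 5.3.1)."""

    def __init__(self, t_start):
        self.t_obs_start = t_start   # start of current observation period
        self.busy_since = None       # time the channel last turned busy
        self.busy_total = 0.0        # accumulated busy time this period

    def channel_busy(self, t):
        """Called when carrier sensing or the NAV indicates a busy channel."""
        if self.busy_since is None:
            self.busy_since = t

    def channel_idle(self, t):
        """Called when the channel becomes idle again."""
        if self.busy_since is not None:
            self.busy_total += t - self.busy_since
            self.busy_since = None

    def ratio_and_reset(self, t):
        """Return br over [t_obs_start, t] and start a new observation period."""
        if self.busy_since is not None:   # channel currently busy: count up to t
            self.busy_total += t - self.busy_since
            self.busy_since = t
        period = t - self.t_obs_start
        br = self.busy_total / period if period > 0 else 0.0
        self.t_obs_start, self.busy_total = t, 0.0
        return br
```

DRA would call ratio_and_reset at its two observation points (MAC callback and scheduled-send expiry), which matches the text's claim that only a few additions and one division are needed.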
5.3.2 Fairness Support
In traditional wired networks, max-min fairness [11] means that, for each session
p, the rate rp cannot be increased without decreasing the rate of some session p′ for which
rp′ ≤ rp. It is achieved when each session has a bottleneck link.
In the WLAN, all nodes share the same wireless channel and each reaches its destination
via one hop. To establish max-min fairness, one may regard the application layer as
the "bottleneck link" if it injects less traffic than the MAC can transmit in the fair share of
the channel time. DRA achieves this new "max-min fairness" objective by successfully
transmitting all the packets from the nodes whose bottleneck link is at the application
layer, while allowing the other greedy nodes, whose bottleneck link is the shared wireless
channel, to fairly share all the residual channel resource.
In contrast, in the original 802.11 WLAN, packet collision could be severe if many
nodes are heavily loaded. This may lead to packet drops in two ways. First, according to
the 802.11 standard, packets could be dropped due to consecutive retransmission failures.
Second, high collision probability may cause excessively large medium access delay and
hence the buildup of the queue. Then packets will be discarded if the queue is full. 802.11
itself may fail to achieve max-min fairness by dropping packets from both greedy and non-
greedy nodes.
5.3.3 QoS Support
The three-phase resource allocation mechanism leads to a small collision probability
and a short medium access delay, thereby making it possible for DRA to support strict QoS
requirements. To better provide QoS for real-time traffic such as VoIP and streaming video
in the 802.11 WLANs, DRA incorporates a call admission control scheme and a priority
queue scheme as described below.
Call Admission Control over Real-Time Traffic
It can be seen that, in DRA, a short medium access delay is achieved by dynamically
adjusting the resources allocated to each node so as to ensure br ≤ br*. However, in order
for the network to support the short delay and delay variation required by real-time traffic,
this is not sufficient, for the following reason. The delay a packet experiences in a WLAN
consists of the queueing delay and the medium access delay. Even though the latter can
be well controlled, the former will be excessively large if the total traffic load of real-time
traffic exceeds the network capacity. Therefore, call admission control over real-time traffic
is needed.
In this chapter, we do not detail an admission control algorithm due to space limitations.
However, some call admission control schemes recently proposed for WLANs, such as
that in [149], can be used.
Priority Queue Scheme for Admitted Real-Time Traffic
DRA adopts a simple priority queue scheme for the admitted real-time traffic that has
strict delay requirements. Specifically, packets from real-time applications are assigned
higher priority than those from the non-real-time traffic. They are directly sent to the MAC
layer without any delay caused by DRA if they conform to the claimed data rate negotiated
during the admission procedure. If the admitted applications generate more packets
than they should, DRA treats the excess packets as non-real-time traffic and drops them
when the channel resource is insufficient.
Delay and Throughput Guarantee for Admitted Real-Time Traffic
Note that the instantaneous traffic rate of the real-time services may fluctuate from
time to time. To efficiently utilize the channel resource, DRA should increase the sending
rate of non-real-time traffic when the real-time traffic rate decreases by allocating it more
channel time. On the other hand, when the real-time traffic rate increases, DRA should
quickly decrease the non-real-time traffic rate in order not to affect the QoS of the real-
time traffic. To this end, DRA adopts the instantaneous traffic rate update procedure,
which runs every time a non-real-time packet is transmitted. In addition, the backoff
procedure can effectively delay the transmissions of non-real-time traffic so as to release
resources to the real-time traffic when necessary.
5.3.4 Multiple Channel Rates Support
As we will show in Section 5.4.2, different channel bit rates do not impact the convergence
of time fairness. In other words, even if nodes use different channel rates, DRA fairly
allocates the channel time among them. As a result, nodes with higher channel rates will
be able to transmit more packets in the same time period than those with lower channel
rates, which translates into higher throughput. Clearly, DRA prevents the degradation of
aggregate throughput that occurs in multi-rate 802.11. However, in this approach, even
when the channel rate is high, only one packet can be transmitted in one DATA/ACK or
RTS/CTS/DATA/ACK handshake. If the packet size is set according to the base rate or
medium rate, unnecessary control overhead is introduced.
To further reduce the overhead and improve throughput in the allocated channel time,
DRA adopts a channel adaptive packet concatenation (CAPC) mechanism. When the chan-
nel rate is high due to a good channel condition, since channel coherence time typically ex-
ceeds multiple packet transmission times [113], DRA can concatenate several short packets
into one large packet for MAC layer transmission. The number of packets bound in a single
transmission can be as high as the ratio between the current high channel rate and the base
rate. Finally, we note that DRA can work with some existing rate-adaptive schemes, such
as [65, 113, 76], to achieve better performance.
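Under CAPC, the number of short packets bundled into one handshake is bounded by the ratio of the current rate to the base rate. A minimal sketch; the function name and queue argument are assumptions:

```python
def concat_count(current_rate, base_rate, queued):
    """Packets bundled per transmission under CAPC (Section 5.3.4):
    at most current_rate / base_rate, and no more than are queued."""
    assert queued >= 1
    limit = max(1, int(current_rate // base_rate))
    return min(limit, queued)
```

At an 11 Mbps channel rate over a 1 Mbps base rate, up to 11 short packets can share one DATA/ACK exchange, amortizing the fixed control overhead across all of them.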
5.4 Convergence Analysis
In this section, we first show that both the multiplicative-increase phase and the AIMD
phases will converge quickly in DRA. Here we assume that there are some greedy nodes, or
equivalently the total input traffic rate from the network layer is larger than the maximum
throughput of the WLAN. Otherwise, it is only a trivial case where DRA can accommodate
all the traffic. Then, we discuss how the design parameters are selected.
5.4.1 Convergence of Multiplicative-Increase Phase
Here, by convergence, we mean that the currently existing traffic load, expressed in
the channel busyness ratio, reaches or exceeds BM, i.e., br ≥ BM. Accordingly, the
multiplicative-increase phase ends and the network goes into the AIMD phases. Next,
we show how long it takes for DRA to enter the AIMD phases when a greedy node joins
the network and finds the channel underloaded, i.e., br < BM. There are two cases: case
1) there is no traffic in the network yet; case 2) there is some traffic already in the network.
Case 1:
Let tstart denote the time when the application layer sends the first packet to the transmit
queue. After the node finishes the first packet transmission, it obtains the channel
busyness ratio br from the MAC layer over the period starting from tstart, i.e.,

br = tp / (tnow − tstart)        (5.15)

DRA sets tlast MAC callback time as tstart since there is no previous transmission at this node,
i.e.,

tlast MAC callback time = tstart        (5.16)

From Equation (5.10),

rint = tp / (tnow − tlast MAC callback time) = tp / (tnow − tstart) = br        (5.17)

and from Equation (5.11),

rnew1 = rint × brth / br = brth        (5.18)

where rnewi (i ≥ 1) is the updated resource allocation after the ith packet transmission.
We see that this is a very aggressive increase phase: the first node can achieve the highest
allowable capacity after its first packet transmission.
Case 2:
Theorem 1 Assume the existing traffic is not greedy and consumes an amount of resource
rb (0 < rb < BM). Then, after transmitting at most

n* = ⌈ log_(rb/brth) [ (brth − BM) rstart / ((BM − rb)(brth − rb − rstart)) ] ⌉        (5.19)

packets, the network will enter the additive-increase phase. The corresponding time is

∆*MI = [tp / (1 − rb/brth)] × [ (n* − 1)/brth + (1 − (rb/brth)^n*)/rstart − rb(1 − (rb/brth)^(n*−1)) / (brth(brth − rb)) ]        (5.20)
Proof:
Clearly, before the new node starts transmitting, br = rb. Then, after the first packet
transmission at this node,

br = tp / (tnow − tstart) + rb = rint + rb        (5.21)

rnew1 = rint × brth / br = [rint / (rint + rb)] × brth        (5.22)
Similarly,

rnewn = [rnewn−1 / (rnewn−1 + rb)] × brth        (5.23)

1/rnewn = (1/brth)(rb/rnewn−1 + 1) = rb^n / (brth^n · rnew0) + [1 − (rb/brth)^n] / (brth − rb)        (5.24)
where rnew0 is equal to rint, the instantaneous resource computed for the first
packet transmission. When rnewn + rb ≥ BM, i.e., br ≥ BM, the network enters the
additive-increase phase. To calculate the smallest n necessary for the network to enter the
additive-increase phase, we solve for the smallest n such that

rb^n / (brth^n · rnew0) + [1 − (rb/brth)^n] / (brth − rb) ≤ 1 / (BM − rb)        (5.25)
Denote such n by n′. Then, the corresponding time ∆′MI is

∆′MI = Σ_{i=0}^{n′−1} tp/rnewi = [tp / (1 − rb/brth)] × [ (n′ − 1)/brth + 1/rnew0 − (rb/brth)/rnewn′−1 ]        (5.26)

Apparently, n′ and ∆′MI are both decreasing functions of rnew0. From Section 5.3.1, we
know that rnew0 ≥ rstart, so we can obtain their respective upper bounds, denoted by n*
Figure 5–2: Convergence speed of the multiplicative-increase phase (packet length = 1000 bytes, channel rate = 11 Mbps)
and ∆*MI, by replacing rnew0 with rstart. Letting equality hold in Equation (5.25), we get

n′ ≤ n* = ⌈ log_(rb/brth) [ (brth − BM) rstart / ((BM − rb)(brth − rb − rstart)) ] ⌉        (5.27)

∆′MI ≤ ∆*MI = [tp / (1 − rb/brth)] × [ (n* − 1)/brth + 1/rstart − (rb/brth)/rnewn*−1 ]        (5.28)

where ⌈x⌉ is the smallest integer that is greater than or equal to x. Theorem 1 follows
from Equations (5.24), (5.27) and (5.28). □
For an 11 Mbps WLAN without the RTS/CTS mechanism, where packets are
1000 bytes long, brth = 0.93, and BM = 0.95 brth, if 0 ≤ rb < BM − rstart, we show n*
and ∆*MI in Fig. 5–2. It can be seen that the network will enter the additive-increase phase
after the new node finishes transmitting 1 to 31 packets, or equivalently within 0.2 ∼ 4.2 s.
The smaller rstart is, the larger ∆*MI is.
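Theorem 1's bound can be cross-checked against a direct simulation of the recursion in Equation (5.23). The sketch below uses the parameter values of this example; rb > 0 is required, since rb/brth is the logarithm base, and the function names are assumptions.

```python
from math import ceil, log

def mi_steps_bound(rb, brth, bm, rstart):
    """Upper bound n* of Equation (5.19); requires 0 < rb < bm."""
    x = (brth - bm) * rstart / ((bm - rb) * (brth - rb - rstart))
    return ceil(log(x, rb / brth))

def mi_steps_simulated(rb, brth, bm, rstart):
    """Iterate Equation (5.23), starting from rstart, until br = r + rb >= BM."""
    r, steps = rstart, 0
    while r + rb < bm:
        r = r / (r + rb) * brth
        steps += 1
    return steps
```

With brth = 0.93, BM = 0.95 brth, rstart = brth/150, and rb = 0.4, both routes agree on 9 packet transmissions, inside the 1-to-31 range quoted above.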
5.4.2 Convergence to Fairness Equilibrium
In this subsection, we will show how long it takes for DRA to converge to fairness
equilibrium. By fairness equilibrium, we mean all nodes obtain a fair share of the residual
channel resource left by real-time traffic.
Theorem 2 Let N ≥ 1 denote the total number of greedy nodes in the network and rb
denote the resource occupied by the existing non-greedy traffic. Assume the network originally
has a fair resource allocation among the N nodes. Then, after a new greedy
node is added, it will obtain α times the fair share

r* = (brth − rb) / (N + 1)        (5.29)

after

n* = log_γ [ (α − 1)r* / (rstart − r*) ]        (5.30)

AIMD periods. The corresponding time is

∆*AIMD = n* (1 − γ)(brth − rb) / (δ(N + 1))        (5.31)
Proof:
Since, before a new node joins, the network is in a dynamic equilibrium, i.e., each
node gets a fair share of the resource, denoted by r, we know that Σ_{i=1}^{N} r = Nr varies between
γ(brth − rb) and (brth − rb).
Although Nr can be any value between γbrth and brth, we will show later that the
initial value of Nr does not matter in the analysis. Therefore, we assume Nr + rb +
rstart = brth when a new node with the initially allocated resource rstart joins the network.
According to Equation (5.13), the new node's resource then changes to rnew0 = γrstart,
and each of the N nodes' allocated resources changes to a new value r0 = γr. The total
used resource becomes γ(brth − rb) + rb, and the total available resource Ra is

Ra = (1 − γ)(brth − rb)        (5.32)
Then each node will increase its resource according to Equation (5.12).
We call the period from the time when nodes begin to increase their resource to the
time when the allocated resource is decreased by the ratio γ an increase-decrease period.
For the new node, denote by rnew(i, j) its allocated resource after it transmits the jth
packet in the ith (i = 0 initially) increase-decrease period. Similarly, denote by rk(i, j) the
allocated resource of node k (1 ≤ k ≤ N) after it transmits the jth packet in the ith period.
We have

rnew(0, 0) = rnew0 = γrstart        (5.33)
rk(0, 0) = r0 = γr        (5.34)

rnew(i, j) = rnew(i, j − 1) + δ·tpnew / rnew(i, j − 1)
rk(i, j) = rk(i, j − 1) + δ·tpk / rk(i, j − 1)        (5.35)
where tpnew and tpk are the new node's and node k's channel transmission times of one
packet, respectively. By Equation (5.35), we obtain

rnew(i, n) = rnew(i, 0) + δ Σ_{j=0}^{n−1} tpnew / rnew(i, j)
rk(i, n) = rk(i, 0) + δ Σ_{j=0}^{n−1} tpk / rk(i, j)        (5.36)

Meanwhile, according to Equation (5.8), tpnew/rnew(i, j) and tpk/rk(i, j) are the (j + 1)th
packets' scheduled intervals for the new node and node k in the ith period, respectively.
Consequently, Σ_{j=0}^{n−1} tpnew/rnew(i, j) and Σ_{j=0}^{n−1} tpk/rk(i, j) are the times that the new node and node k
spend transmitting their first n packets during the ith period, respectively. Denoting by Ti
the length of the ith period, we obtain

rnew(i + 1, 0) ≈ γ[rnew(i, 0) + δTi]
rk(i + 1, 0) ≈ γ[rk(i, 0) + δTi]        (5.37)

for all k, although each node may transmit a different number of packets in the ith period
due to different packet lengths or channel bit rates. At the end of the ith period, the total
increased amount must equal the available resource Ra, i.e.,

δTi × (N + 1) = (1 − γ)(brth − rb)        (5.38)
which implies that the value of Ti is the same for all i. Therefore, Equation (5.37) leads to

rnew(n, 0) = γ^n rnew(0, 0) + γδTi (1 − γ^n)/(1 − γ)
rk(n, 0) = γ^n rk(0, 0) + γδTi (1 − γ^n)/(1 − γ)        (5.39)

When the network reaches the equilibrium,

r*new = r*new(n + 1, 0) = r*new(n, 0)        (5.40)
r*k = r*k(n + 1, 0) = r*k(n, 0)        (5.41)
r*new = r*k = γ(brth − rb) / (N + 1)        (5.42)

Then all nodes, including the new node, will have the same allocated resource, which dynamically
changes between γ(brth − rb)/(N + 1) and (brth − rb)/(N + 1).
Next, let us derive the convergence speed. We conclude that the nodes converge to
fairness when the following condition is met:

rnew(n, 0) = α × r*new = α × γ(brth − rb) / (N + 1)        (5.43)

where α is a real number close to 1, with α > 1 if rnew0 > r0 and α < 1 if rnew0 < r0. If
rnew0 = r0, then fairness is achieved at the moment the new node joins the
network, and α = 1. By Equations (5.38), (5.39) and (5.43), we obtain

n* = log_γ [ (α − 1)r*new / (γrstart − r*new) ]        (5.44)

and the time ∆*AIMD needed for fairness convergence is

∆*AIMD = n* × Ti = n* (1 − γ)(brth − rb) / (δ(N + 1))        (5.45)

Finally, we note that if Nr + rb + rstart ≠ brth when the new node joins the network, this
condition will be met after at most one increase-decrease period. □
We illustrate n* and ∆*AIMD in Fig. 5–3, where rb = 0, γ = 0.95, brth = 0.93, and
δ = 0.5. We set α = 0.95 for rstart < brth/N, α = 1 for rstart = brth/N, and α = 1.05 for
Figure 5–3: Convergence speed of the AIMD phases when δ = 0.5
rstart > brth/N . Fig. 5–3shows the convergence time is very small. And intuitively, if the
initial resourcerstart is around the fair share, the convergence time is almost equal to zero.
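Theorem 2's quantities are straightforward to evaluate. The sketch below plugs in the parameters used for Fig. 5–3 with N = 50 (so α = 0.95, since rstart = brth/150 < brth/N); the function name is an assumption.

```python
from math import log

def aimd_convergence(n_nodes, rb, brth, gamma, delta, rstart, alpha):
    """n* and the convergence time of Equations (5.44)-(5.45)."""
    r_star = gamma * (brth - rb) / (n_nodes + 1)   # equilibrium share, Eq. (5.42)
    n_star = log((alpha - 1) * r_star / (gamma * rstart - r_star), gamma)
    t_i = (1 - gamma) * (brth - rb) / (delta * (n_nodes + 1))  # from Eq. (5.38)
    return n_star, n_star * t_i

steps, seconds = aimd_convergence(50, 0.0, 0.93, 0.95, 0.5, 0.93 / 150, 0.95)
```

For these values the result is roughly 50 AIMD periods but well under a tenth of a second of wall-clock time, consistent with the small convergence times visible in Fig. 5–3.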
5.4.3 Discussion
The above convergence study considers the case when one node joins the network.
Now let us discuss the case when one node leaves the network. At that point in time,
denote by rtotal the total resource used by the remaining N greedy nodes. If rtotal is less
than BM, DRA enters the multiplicative-increase phase; otherwise, it stays in the AIMD
phases. Following the same analysis as in the above study, we have the following conclusions³.
If DRA is in the multiplicative-increase phase, its convergence time can be approximately
given by replacing rstart with rtotal in Theorem 1. Once DRA is in the AIMD phases,
node k's (1 ≤ k ≤ N) resource rk converges to the new fair share r* = (brth − rb)/N after
n* = log_γ [(α − 1)r* / (rk − r*)] AIMD periods, and the corresponding time is
∆*AIMD = n* (1 − γ)(brth − rb)/(δN).
3 Due to space limitation, detailed analysis is omitted.
5.4.4 Parameter Selection
This subsection discusses the selection of the parameters γ, δ, and rstart in order to achieve
good performance.
For high efficiency, γ should be close to 1, since the utilized resource dynamically
changes between brth and γbrth = BM. We set γ = 0.95 in both the analysis and the simulation
studies. It can be seen from Fig. 5–3 that γ = 0.95 leads to a short convergence time.
From Theorem 2, we see that the fairness convergence time ∆*AIMD is inversely proportional
to δ. To reduce ∆*AIMD, a large δ is desirable. On the other hand, a small δ is
preferred to reduce the degree of oscillation in each node's allocated resource during the
AIMD phases. Furthermore, we find in extensive simulations that the instantaneous
rate update and backoff procedures in DRA effectively accelerate the fairness convergence.
Once a new node joins the network and begins transmitting at the MAC layer, other
nodes' transmissions are slowed down because they observe a busier channel. These procedures
also help dampen the oscillation in the resource adjustment during the additive-increase
phase, since they always reflect the instantaneously achievable rate. Taking all
these factors into account, we set δ = 0.05. The simulation studies given below confirm
that this is a good choice.
rstart affects the convergence speed of both efficiency and fairness. When rstart is
larger, the network achieves high efficiency by passing through the multiplicative-increase phase
more quickly. However, too large an rstart is not appropriate when there are
already many nodes in the network. We therefore set rstart to the fixed value brth/150 in the simulation
studies, which consistently exhibits good performance.
5.5 Performance Evaluation
5.5.1 Simulation Setup
We use ns-2.27 as the simulation tool to evaluate the performance of DRA
and 802.11 in the WLAN. Unless otherwise indicated, the channel bit rate is 11 Mbps, each
node has saturated CBR traffic, and each simulation run lasts 300 seconds.
Figure 5–4: Impact of payload size L and number of nodes n on the optimal threshold for the channel busyness ratio brth
5.5.2 Channel Busyness Ratio Threshold
This subsection illustrates that using a single channel busyness ratio threshold br_th to conduct resource allocation in DRA is robust against different packet lengths and different numbers of nodes.
Fig. 5–4 shows that with packet length L = 1000 bytes, when RTS/CTS is used, the throughput of DRA is maximized and almost insensitive to the number of nodes if br_th = 0.95. When RTS/CTS is not used, the same holds if br_th = 0.93; the corresponding figures are omitted due to similarity. To maximize the throughput, br_th should be larger when the packet length increases and smaller when it decreases. The optimal range for br_th falls within 0.92–0.98.
MAC delay in DRA is less than 20 ms in almost all cases when br_th is less than or equal to 0.95. Also, the collision probability is as low as 0%–20% if br_th is properly set.
To avoid estimating the average packet length transmitted in the WLAN, we opt for a single channel busyness ratio threshold. We set br_th to 0.95 and 0.93 for the cases with and without RTS/CTS, respectively, in all the following simulation studies. If both cases coexist in the network, we recommend setting br_th to 0.93 to guarantee a short delay and a small collision probability in all cases.
5.5.3 Fairness
This subsection illustrates that DRA can quickly converge to high efficiency as well
as good fairness.
In the 500-second simulation, a new node joins the network every 10 seconds. We observe the instantaneous throughput of each individual node. Fig. 5–5(a) shows that each node obtains a fair resource allocation within 0 to 4 seconds after joining the WLAN. As shown in Fig. 5–5(b), 802.11 performs well in fairness convergence speed only when the number of nodes is very small, i.e., less than 10. With more than 10 nodes, 802.11 is quite unfair over short periods, with the instantaneous throughput of each node oscillating over a large range. We also observe that the aggregate throughput of 802.11 drops as the number of nodes increases, while DRA maintains a high and almost unchanged throughput.
DRA also supports max-min fairness (refer to Section 5.3.2) in that it allows all the traffic from those nodes whose traffic rate is less than the fair share of the channel resource
to get through. In the simulation study, there are 10 groups with 5 nodes each. The nodes
[Figure 5–5: Fairness convergence with RTS/CTS: one greedy node joins the network every 10 seconds (packet length = 1000 bytes, each point is averaged over 1 second). Panels: (a) DRA, (b) 802.11; each plots aggregate and individual-node throughput (Mbps) over time (s), with an inset zooming in on 300–310 s.]
of each group have the same traffic rate. The rates for the 10 groups are 0.2, 0.4, 0.8, 1.2,
1.6, 2.0, 2.4, 2.8, 3.2, and 3.6 Mbps respectively.
Fig. 5–6(a) and 5–6(b) show that DRA successfully transmits all the packets of those nodes whose traffic rate is lower than the fair share of the channel resource, both with and without RTS/CTS. In contrast, we observe 0–4% packet losses for the first 15 flows in 802.11, mainly due to its high collision probability. Both DRA and 802.11 drop packets from flows with excessively high data rates. Moreover, in the case of no RTS/CTS, DRA can support the 5 flows with the 1.2 Mbps data rate without dropping
[Figure 5–6: Max-min fairness under different traffic rates (packet length = 1000 bytes). Panels: (a) without RTS/CTS, (b) with RTS/CTS; each plots per-node throughput (Mbps) against node ID for 802.11 and DRA.]
packets. This is because DRA achieves higher throughput without RTS/CTS than with RTS/CTS, which is also verified in Section 5.5.4. Aggregate throughput is improved by 14.9%, from 4.54 to 5.22 Mbps, when RTS/CTS is not used, and by 8.2%, from 3.85 to 4.17 Mbps, when RTS/CTS is used. Note that DRA also achieves better fairness for those flows with a traffic rate larger than the fair share of the channel resource.
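The max-min criterion used above can be made concrete with a progressive-filling sketch. This is the standard construction, not code from the dissertation, and the demand values and capacity in the example are made-up illustrative numbers:

```python
def max_min_allocation(demands, capacity):
    """Progressive filling: serve demands in increasing order; any demand
    below the current equal share is fully satisfied, and the capacity it
    leaves unused is redistributed among the remaining (larger) demands."""
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    alloc = [0.0] * len(demands)
    remaining = capacity
    for k, i in enumerate(order):
        share = remaining / (len(demands) - k)   # equal split of what is left
        alloc[i] = min(demands[i], share)
        remaining -= alloc[i]
    return alloc

# Hypothetical demands (Mbps) against a 1.0 Mbps channel: the two small
# flows get all their traffic through; the two large ones split the rest.
alloc = max_min_allocation([0.05, 0.1, 0.5, 1.0], capacity=1.0)
```

Flows below the fair share are fully served, exactly the property claimed for DRA; the oversized flows each end up with the same residual share.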
Fig. 5–7 shows that DRA can provide channel time fairness under different channel bit rates. There are 30 nodes in the network: the first 10 nodes use 2 Mbps, the second 10 nodes use 5.5 Mbps, and the last 10 nodes use 11 Mbps. The packets are 1000 bytes long. DRA starts the channel adaptive packet concatenation procedure for the multirate
[Figure 5–7: DRA: fairness with multiple channel bit rates (RTS/CTS is used). Panels: (a) channel time share, i.e., fraction of channel time (%) per node ID; (b) per-node throughput (Mbps).]
[Figure 5–8: 802.11: fairness with multiple channel bit rates (RTS/CTS is used). Panels: (a) channel time share; (b) per-node throughput (Mbps).]
WLAN. It concatenates three packets in one transmission for the 5.5 Mbps nodes and six packets for the 11 Mbps nodes, while it still transmits one packet at a time for the 2 Mbps nodes. In contrast, since 802.11 is designed to achieve throughput fairness, slow nodes use much more channel time than fast nodes, as illustrated in Fig. 5–8. As a result, the aggregate throughput is greatly reduced. Compared to 802.11, DRA improves the aggregate throughput by 240%, from 1.3045 Mbps to 4.4410 Mbps.
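The concatenation factors quoted above (1, 3 and 6 packets for 2, 5.5 and 11 Mbps) are consistent with picking, per channel access, a number of equal-size packets roughly proportional to the node's bit rate, so that every node occupies about the same channel time. A minimal sketch, where the rounding rule and the function name are my assumptions rather than the dissertation's specification:

```python
def concat_factor(rate_mbps, base_rate_mbps=2.0):
    """Packets concatenated per transmission so that a node at rate_mbps
    occupies about the same payload airtime as a base-rate node sending
    one packet (MAC/PHY overheads ignored for simplicity)."""
    return max(1, round(rate_mbps / base_rate_mbps))

factors = {r: concat_factor(r) for r in (2.0, 5.5, 11.0)}
```

Under this rule the 2, 5.5 and 11 Mbps classes get 1, 3 and 6 packets per access, matching the numbers reported in the text.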
5.5.4 Efficiency, Delay and Collision
This subsection demonstrates that DRA maintains a stable high throughput, a short delay and a small collision probability, all almost insensitive to the number of active nodes. The throughput of the 802.11 WLAN is improved by as much as 71.62% with RTS/CTS and 157.32% without RTS/CTS, and the MAC delay is kept below 30 ms, whereas it grows to more than 2 seconds in 802.11.
Fig. 5–9 (a), (b) and (c) show that the throughput of DRA stays almost the same as the number of active nodes increases, whereas it degrades dramatically for 802.11. The figures for the scenario without RTS/CTS are omitted due to similarity. When the number of active nodes is 300, DRA improves the throughput by 71.62%, 62.48%, 59.22%, 48.96% and 36.63% with RTS/CTS, and by 66.42%, 84.01%, 82.23%, 126.86% and 157.32% without RTS/CTS, for packet lengths of 200, 500, 1000, 2000 and 5000 bytes, respectively. Fig. 5–9 (d), (e) and (f) show that DRA supports a very short medium access delay, below 30 ms, which is desirable for real-time services. The poor performance of 802.11 is attributed to its high collision probability, as confirmed in Fig. 5–9 (g), (h) and (i).
We also compare DRA's throughput with the theoretical maximum throughput of the 802.11 DCF, which is calculated from p* in Equation (5.4). The results show that the maximum throughput exceeds that of DRA by up to 8.4%, 5.52%, 4.31%, 5.04% and 7.81% with RTS/CTS, and 10.16%, 4.74%, 7.35%, 8.16% and 12.22% without RTS/CTS, for packet lengths of 200, 500, 1000, 2000 and 5000 bytes, respectively. The difference is partly due to the choice of a single channel busyness ratio threshold, as shown in Fig. 5–4, and partly due to the requirement of fairness adjustment and short MAC delay, as well as the difficulty of keeping the channel busyness ratio equal to the optimum value br*.
5.5.5 Quality of Service
This subsection illustrates that DRA can provide a statistical QoS guarantee for real-time traffic, while 802.11 alone performs poorly in this regard. Notice that, as defined in the standard [73], the tolerable packet loss rate is 1%–3%, and the one-way transmission delay is preferably shorter than 150 ms but should be no larger than 400 ms for real-time services.
In this simulation, there are 100 nodes, 50 of which have greedy traffic. From second 0 on, every 60 seconds one new node starts a video flow; from second 30 on, every 60 seconds one new node starts a voice flow. All these nodes randomly choose a destination.
We use a CBR model for the video flows. The rate is set to 64 kb/s with a packet size of 1000 bytes. Voice traffic is modeled as on/off traffic. The on and off periods are
[Figure 5–9: Throughput, MAC delay and collision probability with RTS/CTS. Nine panels plot throughput (Mbps), MAC delay (ms) and collision probability against the number of nodes (0–300) for DRA, 802.11 and the theoretical maximum, at packet lengths L = 200, 1000 and 5000 bytes.]
exponentially distributed with an average value of 300 ms each. During the on periods, traffic is generated at a rate of 32 kb/s with a packet size of 160 bytes. The simulation results are shown in Fig. 5–10 and 5–11, where each throughput point is the average over one second and each delay point represents one packet.
Fig. 5–10(a) shows that DRA can provide a constant bit rate for CBR video traffic, while Fig. 5–11(a) shows that 802.11 fails to do so. No real-time packet drops are observed in DRA, while a number of real-time packets are dropped in 802.11 due to MAC collisions as well as queue overflows. The delivery ratios of all the real-time flows are 100% in DRA, whereas in 802.11 they vary from 69.3% to 97.9% for video flows and from 40.5% to 66.8% for voice flows.
[Figure 5–10: QoS performance in DRA. Panels: (a) throughput (Mbps) over time (s) for greedy, video and voice traffic and the aggregate; (b) voice delay; (c) video delay.]
[Figure 5–11: QoS performance in 802.11. Panels: (a) throughput; (b) voice delay; (c) video delay.]
In DRA, only a few video packets have a delay larger than the 400 ms limit, and about 50 voice packets do. Looking at the delay performance of each voice flow, e.g., the first voice flow, which starts at second 30, a total of 16 packets violate the delay requirement. They occur in 4 bursts of 3 to 5 packets each. That is, during the 600-second period, there are 4 short periods, each about 400–600 ms long, in which 3 to 5 packets violate the delay requirement. This is acceptable for most current VoIP users. In DRA, the mean and standard deviation of the delay are 14 ms and 29.8 ms for voice packets, and 12 ms and 17.9 ms for video packets, respectively. 99% of voice packets have a delay below 115.2 ms; for video packets the 99th percentile is 74.2 ms.
5.6 Related Work and Discussions
As mentioned earlier, there are three threads of research, namely efficiency, fairness
and QoS support in WLANs, that are related to our work.
Many research efforts have focused on reducing collisions and increasing throughput in WLANs. MACA [82] and MACAW [14] used the BEB and RTS/CTS mechanisms, which were adopted by 802.11. In the papers [16, 21], an adaptive contention window was proposed to replace the BEB mechanism. Both schemes rely on estimating the number of active nodes in the network; the latter also needs to estimate the length of transmitted packets. However, as shown in the paper [17], obtaining an accurate and timely estimate of the number of active nodes is not easy. There are also schemes that do not need such estimation; however, they substantially change the IEEE 802.11 standard. For instance, FCR [90] uses a new fast collision resolution backoff scheme to replace the BEB. Recently, Kim and Hou [85] proposed MFS, which reduces collisions by scheduling a delay before a node attempts transmission. Despite the significant improvement over 802.11, the achieved throughput drops significantly as the number of nodes increases. Also, MFS requires run-time estimation of the number of active nodes. In this work, without changing the BEB or introducing estimation of the number of nodes, we achieve high throughput by accurately controlling the total traffic rate in light of the observed channel busyness ratio. More notably, the throughput remains rather steady even in the presence of a large number of nodes.
Meanwhile, some studies sought to improve throughput by exploiting wireless channel variations [80, 65, 113, 76]. The underlying idea is to increase the transmission rate and/or transmit more packets when the channel is good. However, this idea does not necessarily reduce the collision probability of each transmission attempt in the presence of many concurrent users, so the effect of collisions on efficiency is not necessarily alleviated. These schemes can be integrated into our DRA framework to deliver better performance.
Several representative schemes have studied the fairness issue in WLANs [123, 110, 141, 121, 8]. Pilosof et al. [110] discovered the unfavorable effect of the buffer size at the AP on downlink TCP flows and proposed to reset the receiver window of all the TCP flows at the AP. Realizing that the throughput fairness model may lead to aggregate throughput degradation in multi-rate WLANs, Tan and Guttag [121] proposed a time-based regulator at the AP to achieve time-based fairness and hence improve throughput. However, like [110], it only works in infrastructure WLANs. By modifying the BEB, Vaidya et al. [123] proposed a distributed fair scheduling scheme that imitates Self-Clocked Fair Queueing (SCFQ), a centralized fair queueing algorithm. SCORE [141] was used to achieve proportional differentiation; again, the BEB process was replaced, this time by adaptive inter-transmission spacing control. Moreover, both this scheme and [123] were designed for throughput fairness. Recently, Bejerano et al. [8] studied the network-wide resource allocation problem through intelligent association of users to APs; however, resource allocation among mobile users under one AP was not studied. In contrast, DRA is a fully distributed scheme that, without changing the BEB, achieves the desirable time fairness.
Along the line of QoS support in WLANs, most works focused on service differentiation, such as the representative schemes [1, 125, 137]. Aad and Castelluccia [1] proposed to use different interframe spaces, contention windows or maximum frame lengths for different priorities. Veres et al. [125] proposed two mechanisms, virtual MAC and virtual source, to provide differentiated services. To enhance the QoS support of the emerging 802.11e standard ([31, 72]), Xiao et al. [137] adopted a two-level mechanism to protect real-time traffic. Sobrinho and Krishnakumar [119] proposed the Blackburst scheme to minimize delay for real-time traffic; unfortunately, stations transmitting real-time traffic are required to have channel-jamming capability. In the paper [114], the transmission period is split between real-time and non-real-time traffic, thereby enabling a QoS guarantee for real-time traffic; however, the DCF mode is dramatically changed. To sum up, if the semantics of the 802.11 DCF is maintained, all the works mentioned above can only support service differentiation. Our work, in contrast, provides a statistical QoS guarantee for real-time traffic without modifying 802.11.
From the above discussion of related work, we clearly see that there has been no systematic study that addresses efficiency, fairness and QoS on 802.11 simultaneously. To the best of our knowledge, our work is the first along this line. For this reason, we mainly focus on evaluating the performance of DRA rather than comparing it with schemes that address only one of these issues. Further, since 802.11 has already been widely implemented in commercial products, our scheme, which runs on top of it, is more attractive than those that directly alter 802.11's semantics to a lesser or greater extent. Here, we highlight the following advantages of DRA over those schemes. Unlike previous schemes aimed at improving efficiency, DRA still yields high throughput even in the presence of a large number of users. Compared with the schemes targeting fairness, DRA achieves good time fairness; in addition, it greatly improves short-term fairness and max-min fairness (refer to Section 5.3.2), especially when the number of users is larger than 10. Instead of only supporting service differentiation, as is the case with most previous schemes that maintain the 802.11 DCF semantics, DRA provides a statistical QoS guarantee for real-time services.
5.7 Conclusions
As 802.11-based wireless LANs enjoy increasing popularity, several issues are crucial to supporting a large number of users with a portfolio of applications such as VoIP, streaming multimedia, and web browsing. Among them are efficiency, fairness and QoS. Although each has been extensively researched, the overall progress, while significant, is not completely satisfactory due to the lack of a unified solution that considers these three issues simultaneously.
Motivated by this observation, we conducted a comprehensive study and devised DRA, a distributed resource allocation scheme that is perhaps the first general framework aimed at addressing high efficiency, time fairness, and QoS support for real-time services at the same time.
DRA utilizes the channel busyness ratio to characterize the network status. Based on this information, DRA employs a novel three-phase control mechanism, namely the multiplicative-increase, additive-increase and multiplicative-decrease phases, to enable the network to converge to high throughput and time fairness, as proven by theoretical analysis. QoS for real-time services is achieved by conducting call admission control over real-time services and properly dividing the network resource between real-time and non-real-time services.
Extensive simulations demonstrate that DRA maintains high efficiency, good time fairness, short medium access delay and a zero packet loss rate for real-time traffic. Compared to 802.11, it improves throughput by as much as 71% with RTS/CTS and 157% without RTS/CTS. Time fairness is achieved for single-rate and multi-rate WLANs. Moreover, real-time traffic such as VoIP or streaming video is supported with a statistical guarantee.
CHAPTER 6
PHYSICAL CARRIER SENSING AND SPATIAL REUSE IN MULTIRATE AND MULTIHOP WIRELESS AD HOC NETWORKS
Physical carrier sensing is an effective mechanism of medium access control (MAC)
protocols to reduce collisions in wireless networks, and the size of the carrier sensing range
has a great impact on the system performance. Previous studies have shown that the MAC
layer overhead plays an important role in determining the optimal carrier sensing range.
However, variable transmission ranges and receiver sensitivities for different channel rates
and the impact of multihop forwarding have been ignored. In this chapter, we investigate
the impacts of these factors as well as several other important factors, such as SINR (signal
to interference plus noise ratio), node topology, hidden/exposed terminal problems and
bidirectional handshakes, on determining the optimum carrier sensing range to maximize
the throughput through both analysis and simulations. The results show that if any one
of these factors is not addressed properly, the system performance may suffer a significant
degradation. Furthermore, considering both multirate capability and carrier sensing ranges,
we propose to use bandwidth distance product as a routing metric, which improves end-to-
end throughput by up to 27% in the simulated scenario.
6.1 Introduction
Wireless ad hoc networks have wide applications wherever wireless communication and networking are preferred for convenience and/or low cost, such as wireless mesh networks and sensor networks. In such networks, the medium access control (MAC) protocol plays a key role in coordinating the users' access to the shared medium. The IEEE 802.11 [68] protocol is a CSMA/CA (carrier sense multiple access with collision avoidance) MAC protocol and has become the standard for wireless LANs. The 802.11 DCF (distributed coordination function) protocol has also been widely studied in wireless multihop ad hoc networks due to its simple implementation and distributed nature.
Carrier sensing is a fundamental mechanism in CSMA/CA protocols. Each user senses the channel before a transmission and defers the transmission if the channel is busy, so as to reduce collisions. This mechanism consists of physical carrier sensing and virtual carrier sensing. In physical carrier sensing, the channel is determined to be busy if the sensed signal power is larger than a carrier sensing threshold CS_th, and idle otherwise. In virtual carrier sensing, each user regards the channel as busy during the period indicated in the MAC header of MAC frames, such as the RTS (request to send), CTS (clear to send), DATA, and ACK (acknowledgement) frames defined in the IEEE 802.11 protocol.
The virtual carrier sensing mechanism can only notify nodes within the transmission range of the occupied medium, i.e., the range in which a transmission can be decoded correctly if the interference level is small enough. Transmissions outside this range can introduce enough interference to corrupt the reception in many cases. In addition, some ongoing transmissions may not be decoded correctly due to other nearby transmissions, causing virtual carrier sensing to fail. Hence virtual carrier sensing cannot rule out collisions from inside the transmission range and is incapable of avoiding collisions from outside it.
The physical carrier sensing range, in which a transmission is heard but may not be decoded correctly, can be much larger than the transmission range; hence physical carrier sensing can be more effective than virtual carrier sensing in avoiding interference, especially in multihop networks. However, a large carrier sensing range reduces spatial reuse and hurts the aggregate throughput, because any potential transmitter that senses a busy channel is required to keep silent. Therefore, the optimum carrier sensing range should balance spatial reuse against the impact of collisions in order to optimize the system performance.
The IEEE 802.11a/b/g protocols provide multiple channel rates in wireless multihop ad hoc networks. Different channel rates have different transmission ranges, SINR (signal to interference plus noise ratio) requirements, and receiver sensitivities. Does each rate require a different optimum carrier sensing threshold? How should the carrier sensing threshold be set when multiple rates coexist? Furthermore, multihop forwarding is common and may significantly change the optimum carrier sensing threshold compared with the case where only one-hop flows are considered. Higher channel rates result in shorter transmission delay but also shorter ranges. We must carefully select the appropriate channel rate to maximize the system performance in terms of end-to-end delay/throughput and power consumption, which are all important performance metrics for multihop flows. To optimize the end-to-end performance of multihop flows, the carrier sensing range, spatial reuse and hop distance must all be appropriately addressed.
The default physical carrier sensing threshold and carrier sensing strategy in the widely used network simulation tools ns-2 and OPNET are not optimal in most cases. Excessive collisions result in false link/route failures followed by rerouting and unnecessary end-to-end retransmissions of TCP packets. Poor performance at the MAC layer as well as at higher layers has been reported in the literature, especially for multihop flows in wireless ad hoc networks ([91, 28, 29, 151, 152, 162, 147]). Furthermore, these simulation tools do not consider the varying carrier sensing and transmission ranges required when multiple channel rates of the IEEE 802.11 protocols are used; hence simulation studies based on them may not reflect the performance of real products.
Many papers have noticed the impact of carrier sensing and spatial reuse on system performance. Xu et al. [138] indicate that virtual carrier sensing via RTS/CTS is far from sufficient to solve the interference problem and that a larger physical carrier sensing range can help to some degree. In the papers [55, 61, 54], co-channel interference is analyzed to derive the spatial reuse and the capacity of wireless networks in which a minimum SINR is necessary for successful communication. Gobriel et al. [52] construct a collision model together with an interference model of a uniformly distributed network to derive the optimum
transmission power that yields maximum throughput and minimum energy consumption per message. Li et al. [93] identify several unfairness problems due to the EIFS duration required by the carrier sensing mechanism and propose to use a variable EIFS duration.
Recently, several works have attempted to identify the optimum carrier sensing range. Deng et al. [37] illustrate the impact of the physical carrier sensing range on the aggregate throughput of one-hop flows and propose a reward formulation to characterize the trade-off between spatial reuse and packet collisions. Zhu et al. [167] attempt to identify the optimal carrier sensing threshold that maximizes spatial reuse for a regular topology. Yang and Vaidya [142] show that MAC layer overheads have a great impact on the choice of carrier sensing range. However, the interactions between the carrier sensing range and the variable transmission ranges of different channel rates, as well as their impact on network performance, have not been identified by prior research, and the impact of multihop forwarding on the carrier sensing range has not been addressed either. Several other important factors also need further study in determining an optimum carrier sensing range, such as the varying SINR requirements and receiver sensitivities of different channel rates, bidirectional handshakes, the tradeoff between spatial reuse and collisions, node density and network topology, and the impact on higher layers' performance.
In this chapter, we use both analysis and simulations to illustrate the relationships between all these factors and the system performance. We demonstrate that if any of these factors is not considered properly in determining the optimal carrier sensing range, the system performance can suffer a significant loss.
The rest of this chapter is organized as follows. Section 6.2 studies the optimum carrier sensing range subject to various factors and its impact on the aggregate one-hop throughput. Based on the results of Section 6.2, Section 6.3 illustrates how to set the carrier sensing threshold in a multirate ad hoc network and how it affects the end-to-end throughput, delay and energy consumption of multihop flows. In Section 6.4, we introduce several important ns-2 extensions and conduct simulation studies to verify the analytical results. Finally, Section 6.5 concludes this chapter.
6.2 Optimum Carrier Sensing Range
In this section, we first derive the optimum carrier sensing range in the worst-case scenario, where the interference is most severe. Both the Shannon capacity and 802.11's discrete channel rates are considered in the analytical studies. We then discuss the tradeoff between the hidden terminal problem and the exposed terminal problem in maximizing the aggregate throughput, and the impact of random topology and bidirectional handshakes.
6.2.1 Aggregate Throughput and SINR at the Worst Case
We first introduce several notations before discussing the optimum carrier sensing range. RX_th denotes the smallest received signal power required for correct decoding at the receiver; it determines the transmission range and the corresponding maximum transmission distance d_t. CS_th denotes the carrier sensing threshold: a node senses an idle channel if the sensed power level is less than CS_th, and a busy channel otherwise. It determines the maximum sensing distance d_c. X denotes the relative size of the carrier sensing range compared to the transmission range:

X = d_c / d_t    (6.1)
It can be shown that the maximum interference level is achieved when six other nodes transmit simultaneously on the boundary of the transmitter's carrier sensing range, as shown in Fig. 6–1, given that any two transmitters must be at least d_c away from each other. As in the cellular network scenario, these 6 nodes form the first tier of interfering nodes. Since any other interfering nodes are farther away and contribute much less interference than the first tier, we ignore them when calculating the SINR. To facilitate the calculation, we also show the two-dimensional coordinates of the nodes in Fig. 6–1(b), where α denotes the angle between N0D0 and N0N1.
[Figure 6–1: Interference model. Panel (a): transmitter N0 with receiver D0 and six first-tier interferers N1–N6 with their receivers D1–D6. Panel (b): two-dimensional coordinates, with N0 at (0, 0), D0 at (d_t cos α, d_t sin α), and the interferers at (±d_c, 0) and (±d_c/2, ±√3 d_c/2).]
Let d_i denote the distance between node N_i (0 ≤ i ≤ 6) and D_0, with d_0 ≤ d_t. Then the received power P_i (0 ≤ i ≤ 6) at node D_0 of the signal from node N_i is

P_i = P_0 (d_0 / d_i)^γ    (6.2)

where γ is the path loss exponent, typically 2 ≤ γ ≤ 5. The SINR is

SINR = P_0 / (Σ_{i=1}^{6} P_i + P_N) = 1 / (Σ_{i=1}^{6} (d_0/d_i)^γ + P_N/P_0)    (6.3)

where P_N is the noise level, normally much less than the power of the closest interferer. It can be shown that when α is in the range [0, π/6], the SINR is an increasing function of α for γ > 2. Since we consider the worst case, the SINR should be calculated at α = 0 and d_0 = d_t, i.e.,

1/SINR = 1/(X−1)^γ + 1/(X+1)^γ + 2[(X/2 − 1)² + (√3 X/2)²]^(−γ/2) + 2[(X/2 + 1)² + (√3 X/2)²]^(−γ/2) + P_N/P_0    (6.4)
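Equation (6.4) is straightforward to evaluate numerically. The helper below is an illustrative sketch (the function names are mine); it normalizes d_t to 1 and places the six first-tier interferers on the hexagon of radius d_c = X:

```python
import math

def inv_sinr(X, gamma, pn_over_p0=0.0):
    """1/SINR at the worst case (Eq. 6.4): receiver at distance 1 from its
    transmitter, six interferers at (+-X, 0) and (+-X/2, +-sqrt(3)X/2)."""
    side = 3.0 * X * X / 4.0                      # (sqrt(3)X/2)^2
    return ((X - 1.0) ** -gamma + (X + 1.0) ** -gamma
            + 2.0 * ((X / 2.0 - 1.0) ** 2 + side) ** (-gamma / 2.0)
            + 2.0 * ((X / 2.0 + 1.0) ** 2 + side) ** (-gamma / 2.0)
            + pn_over_p0)

def sinr_db(X, gamma):
    """Worst-case SINR in dB, noise ignored."""
    return 10.0 * math.log10(1.0 / inv_sinr(X, gamma))
```

For example, with γ = 4, growing X from 2 to 3 lifts the worst-case SINR from below 0 dB to roughly 9 dB, illustrating why the carrier sensing range must exceed the transmission range by a comfortable margin.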
Given a required SINR for a coding/modulation scheme and the corresponding achievable channel rate r_c (bps), X is determined. The achievable data rate r_d (bps) is

r_d = L_pl / (T_preamble + (L_H + L_pl)/r_c)    (6.5)

where T_preamble (in seconds) is the rate-independent preamble of a packet, such as the physical layer preamble used for synchronization at the receiver and the short interframe spacing SIFS at the MAC layer; L_H is the protocol overhead in bits from the different protocol layers, such as the MAC and IP layers; and L_pl is the size in bits of the payload we wish to transmit.
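Equation (6.5) simply divides payload bits by total air time. A quick sketch follows; the 802.11b-flavored numbers in the example are illustrative assumptions, not values used in this chapter:

```python
def data_rate(rc_bps, payload_bits, header_bits, t_preamble_s):
    """Achievable data rate r_d (Eq. 6.5): payload bits over the preamble
    time plus the serialization time of header and payload at rate r_c."""
    return payload_bits / (t_preamble_s + (header_bits + payload_bits) / rc_bps)

# Example: 11 Mbps channel, 1000-byte payload, 50-byte headers, 192 us preamble.
rd = data_rate(11e6, 8000, 400, 192e-6)   # about 8.4e6 bps
```

The overhead terms cost roughly a quarter of the nominal 11 Mbps here, and r_d approaches r_c only as T_preamble and L_H go to zero.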
To calculate the maximum aggregate throughput, we need to know the total number of concurrent transmissions. For a topology with area A and with concurrently transmitting pairs placed as in Fig. 6–1, each transmit-receive pair occupies a nonoverlapping area of A_0 = (√3/2) d_c², ignoring the border effect. For a general topology, A_0 is proportional to d_c². Thus the total number of allowed concurrent transmissions is

    A / A_0 ∝ 1/d_c² = 1/(d_t² X²)                                      (6.6)
Thus the aggregate throughput S is proportional to

    S ∝ r_d/(d_t² X²) = [1/(d_t² X²)] · L_pl/(T_preamble + (L_H + L_pl)/r_c)   (6.7)

In the following subsections, we discuss how to select X to maximize the aggregate throughput under Shannon capacity and under the 802.11 data rates, respectively.
Once X is determined, CSth can be set to the power level sensed at distance dc, to guarantee that any new concurrent transmission happens at least dc away. Let Tcs denote the ratio of RXth to CSth:

    T_cs = RX_th / CS_th = X^γ                                          (6.8)
[Figure 6–2 appears here: four panels plotting X, SINR (dB), channel rate (bps/Hz), and Tcs (dB) against the path loss exponent γ ∈ [2, 5].]

Figure 6–2: Carrier sensing threshold with Shannon capacity
6.2.2 Maximum Throughput and Optimum Carrier Sensing Range under Shannon Capacity
Using the Shannon capacity formula, the achievable channel rate rc can be obtained for a given SINR and bandwidth W (Hz):

    r_c = W log₂(1 + SINR)                                              (6.9)

    S ∝ [1/(d_t² X²)] · L_pl/(T_preamble + (L_H + L_pl)/(W log₂(1 + SINR)))   (6.10)

Thus

    argmax_X S = argmin_X [ T_preamble X²/(L_H + L_pl) + X²/(W log₂(1 + SINR)) ]   (6.11)
When X is small, log₂(1 + SINR) increases with X faster than X²; when X is large, it increases more slowly than X². Thus there is an optimum value of X that maximizes S. By setting the derivative of the objective in Equation (6.11) with respect to X equal to 0, the optimum value of X can be solved for given values of T_preamble, L_H and L_pl. The results are shown in Fig. 6–2 for T_preamble = 0 and L_H = 0.
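Since the closed form is cumbersome, the optimum X of Equation (6.11) can also be found by a coarse grid search. The sketch below takes T_preamble = 0 and L_H = 0 as in the figure, ignores noise, and sets W = 1 Hz (W only scales the objective):

```python
import math

def inv_sinr(X, gamma):
    # Worst-case 1/SINR of Eq. (6.4), noise ignored
    mid = (X / 2 - 1) ** 2 + 3 * X ** 2 / 4
    far = (X / 2 + 1) ** 2 + 3 * X ** 2 / 4
    return ((X - 1) ** -gamma + (X + 1) ** -gamma
            + 2 * mid ** (-gamma / 2) + 2 * far ** (-gamma / 2))

def objective(X, gamma):
    # Eq. (6.11) with T_preamble = 0, L_H = 0, W = 1: minimize X^2 / log2(1 + SINR)
    return X ** 2 / math.log2(1 + 1 / inv_sinr(X, gamma))

# Coarse grid search over X in [1.5, 10) at gamma = 4
X_opt = min((x / 100 for x in range(150, 1000)),
            key=lambda x: objective(x, 4))
```

For γ = 4 the search lands near X ≈ 3, consistent with the 2.2–3.2 range plotted in Fig. 6–2; the objective is quite flat around the optimum.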
Table 6–1: Signal-to-noise ratio and receiver sensitivity

    Rates (Mbps)    SINR (dB)    Receiver sensitivity (dBm)
    54              24.56        -65
    48              24.05        -66
    36              18.80        -70
    24              17.04        -74
    18              10.79        -77
    12               9.03        -79
     9               7.78        -81
     6               6.02        -82
[Figure 6–3 appears here: two panels plotting X and Tcs (dB) against SINR (dB) from -3 to 30, with one curve for each of γ = 2, 3, 4, 5.]

Figure 6–3: Carrier sensing threshold with different SINR
When T_preamble > 0 and L_H > 0, X and Tcs could be smaller. We do not further discuss the impact of these protocol overheads on the carrier sensing range under Shannon capacity. However, as we will see below, when the discrete data rates of the IEEE 802.11 standard are considered, it is the SINR requirement, rather than these protocol overheads, that plays the major role in determining the optimum carrier sensing range.
6.2.3 Maximum Throughput and Optimum Carrier Sensing Range under the Discrete Channel Rates of the IEEE 802.11
For a given channel rate rc, there is a corresponding SINR requirement. For example, Table 6–1 shows the SINR requirements of some products for different channel rates [143]. Given the SINR, we can derive the value of X according to Equation (6.4). A smaller X violates the SINR requirement, and a larger X decreases the spatial reuse and the aggregate throughput. Thus, by Equation (6.7), the protocol overheads T_preamble and L_H do not affect the optimum value of X.
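For a given SINR requirement, X can be obtained from Equation (6.4) by bisection, since 1/SINR is monotonically decreasing in X; noise is again ignored in this sketch:

```python
def inv_sinr(X, gamma):
    # Worst-case 1/SINR of Eq. (6.4), noise ignored
    mid = (X / 2 - 1) ** 2 + 3 * X ** 2 / 4
    far = (X / 2 + 1) ** 2 + 3 * X ** 2 / 4
    return ((X - 1) ** -gamma + (X + 1) ** -gamma
            + 2 * mid ** (-gamma / 2) + 2 * far ** (-gamma / 2))

def solve_X(sinr_db, gamma, lo=1.01, hi=100.0):
    """Smallest X meeting the SINR requirement (bisection on Eq. (6.4))."""
    target = 10 ** (-sinr_db / 10)        # required 1/SINR
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if inv_sinr(mid, gamma) > target else (lo, mid)
    return hi

X_54 = solve_X(24.56, 4)   # 54 Mbps requirement from Table 6-1, gamma = 4
```

Tcs then follows from Equation (6.8) as X_54 ** gamma.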
[Figure 6–4 appears here: four panels plotting X, Tcs (dB), CSth (dBm), and the normalized maximum carrier sense distance against γ ∈ [2, 5], with one curve for each SINR requirement of 24.56, 18.80, 10.79, and 6.02 dB.]

Figure 6–4: Carrier sensing threshold with discrete channel rates of 802.11
The optimum values of X and Tcs for different SINR requirements are given in Fig. 6–3. We also illustrate in Fig. 6–4 the optimum carrier sensing range for 4 discrete channel rates, 6, 18, 36, and 54 Mbps, with the SINR requirements in Table 6–1. From the figure, we can observe that the larger the SINR requirement is, the larger X is. X changes over a large range for different values of SINR, and so does Tcs.
Fortunately, this does not mean that the optimum carrier sensing range changes over a large range, because the transmission range and RXth also change considerably across channel rates. Table 6–1 shows the receiver sensitivity required by the same product at each channel rate. RXth should be larger than or equal to the receiver sensitivity. This reflects the common knowledge that, for 802.11 products, higher channel rates are sustained only over shorter ranges. The optimum carrier sensing threshold CSth can be obtained from the given RXth and the corresponding optimum value of Tcs for a given channel rate by the following equation:

    CS_th = RX_th / T_cs                                                (6.12)
Let i ∈ {6, 18, 36, 54} index dc(i) and CSth(i) at channel rates 6, 18, 36, and 54 Mbps, respectively. We define a normalized maximum carrier sensing distance as the ratio of dc(i) to dc(6), representing the size of the optimum carrier sensing range at each channel rate relative to that at 6 Mbps; since CSth is the power level sensed at distance dc,

    d_c(i) / d_c(6) = ( CS_th(6) / CS_th(i) )^{1/γ}                     (6.13)
Setting RXth to the value of the receiver sensitivity, we plot CSth and dc(i)/dc(6) in Fig. 6–4. We can observe that although the optimum value of X differs considerably across channel rates, the carrier sensing threshold and the carrier sensing range do not: the difference ranges from 0 to 2 dB.
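In dB, Equations (6.12) and (6.13) reduce to subtraction and scaling; a lower (more sensitive) threshold is sensed farther away. A small helper, with purely hypothetical threshold values, might look like this:

```python
def cs_threshold_dbm(rx_sens_dbm, tcs_db):
    # Eq. (6.12) in dB: CSth = RXth - Tcs, with RXth set to the receiver sensitivity
    return rx_sens_dbm - tcs_db

def normalized_cs_distance(csth_i_dbm, csth_6_dbm, gamma):
    # Eq. (6.13) in linear scale: dc(i)/dc(6) = (CSth(6)/CSth(i))^(1/gamma)
    return 10 ** ((csth_6_dbm - csth_i_dbm) / (10 * gamma))

# A threshold 3 dB more sensitive extends dc by 10^(3/30) at gamma = 3
ratio = normalized_cs_distance(-100, -97, 3)
```

A 3 dB threshold difference at γ = 3 changes the sensing distance by only about 26%, which is why the per-rate optimum ranges in Fig. 6–4 stay close together.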
6.2.4 Impact of Random Topology

The optimum carrier sensing threshold discussed above only considers a special case, i.e., nodes are always available at the desired locations whenever they are needed, making the scheduled transmissions in Fig. 6–1 possible. However, this situation rarely happens in practice. First, in a random topology, the probability that nodes are located at exactly the desired places is small. Second, even if it happens, the chance that all of them successfully contend for the channel for concurrent transmissions is still small. Therefore, considering 6 concurrent interfering nodes is too conservative to maximize the spatial reuse in a random topology.
Another extreme is to consider only the one possibly nearest interfering node, such as node N1 relative to the transmit-receive pair N0 and D0 in Fig. 6–1. The nearest interference distance is dc − dt; let X′, CS′th and T′cs denote the corresponding X, CSth and Tcs, respectively. We have

    SINR = ( (d_c − d_t) / d_t )^γ = (X′ − 1)^γ                         (6.14)
Given the SINR requirements, we can use the same method as in Section 6.2.3 to derive the carrier sensing threshold CS′th for different channel rates. We have X′ < X, CS′th > CSth and T′cs < Tcs. CS′th can be 1 to 6 dB higher than CSth. The corresponding
[Figure 6–5 appears here: (a) pair A–B with carrier sensing radius Xdt and interference radius (X−1)dt, with pair C–D nearby; (b) the same nodes with an obstruction between A and C.]

Figure 6–5: Tradeoff between exposed terminal problem and hidden terminal problem
maximum carrier sensing distance is 50% to 94%, and the area of the carrier sensing range 25% to 89%, of the original values, depending on the path loss exponent γ and the channel rate rc. Thus a higher carrier sensing threshold CSth may greatly increase the spatial reuse without introducing severe collisions.
6.2.5 Tradeoff between the Exposed Terminal Problem and the Hidden Terminal Problem
In the analysis of Sections 6.2.2 and 6.2.3, we considered all the interference in the worst case to calculate X and CSth. There will be almost no hidden terminal problems, because all interfering nodes that may contribute enough interference to corrupt the received packets fall within the carrier sensing range and are required to defer their own transmissions. Notice that physical carrier sensing does not completely solve the hidden terminal problem. For example, in Fig. 6–5(b) there is an obstruction between nodes A and C. C may not be able to sense the transmission of A and hence may initiate a new transmission, resulting in a collision at node B.
As we indicated in the previous subsection, this carrier sensing threshold may be too conservative. Here, we point out another way the large carrier sensing range impairs spatial reuse. As shown in Fig. 6–5(a), node C's transmission to D will not introduce enough interference to corrupt B's reception. However, node C senses A's transmission and defers its own transmission. This results in poor spatial reuse and is commonly called the exposed terminal problem.
From the point of view of the intended receiver B in Fig. 6–5(a), we call the region of all points within distance (X − 1)dt of B the interference range of B. Any single interfering node outside this range cannot corrupt B's packet reception from A. To measure the impact of the exposed terminal problem, we define an exposed-area ratio δ as the ratio of the area of the carrier sensing range to that of the interference range, minus 1:

    δ = π(X d_t)² / ( π((X − 1) d_t)² ) − 1 = ( X/(X − 1) )² − 1        (6.15)
It is easy to show that δ decreases from 3 to 0.05 as X increases from 2 to 40. From Fig. 6–4, we know that a smaller channel rate requires a smaller X and hence suffers a larger exposed-area ratio. Even for the highest channel rate allowed in 802.11a/g, 54 Mbps, the exposed-area ratio cannot be ignored when γ ≥ 3, because δ = 24% and 56% when X = 10 and 5, respectively. Moreover, this is not even a worst case: in a random topology it is common for the distance between a transmitter and its intended receiver to be less than the maximum transmission distance dt, in which case the interference range is smaller because the received signal is stronger. Therefore, the exposed terminal problem cannot be ignored.
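The exposed-area ratio of Equation (6.15) depends only on X and is quick to tabulate:

```python
def exposed_area_ratio(X):
    # Eq. (6.15): delta = (X / (X - 1))^2 - 1
    return (X / (X - 1)) ** 2 - 1

# delta = 3 at X = 2, 0.5625 at X = 5, and about 0.052 at X = 40
values = {X: exposed_area_ratio(X) for X in (2, 5, 10, 40)}
```

The quadratic decay means the exposed area shrinks quickly at first but stays non-negligible for the X values typical of high-rate 802.11 operation.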
Therefore, to alleviate the exposed terminal problem and increase the spatial reuse, it is necessary to decrease the carrier sensing range. However, this may leave part of the interference range outside the carrier sensing range and result in the hidden terminal problem. The smaller the carrier sensing range, the milder the exposed terminal problem and the more severe the hidden terminal problem. Apparently, there is a tradeoff between the exposed terminal problem and the hidden terminal problem in order to increase spatial reuse and alleviate collisions at the same time.
6.2.6 Carrier Sensing Range and Strategies for Bidirectional Handshakes
The interference model discussed in Section 6.2 focuses on one-way DATA transmissions. However, wireless links are not reliable, due to collisions, wireless channel errors and mobility. MAC-layer acknowledgements are necessary to check link reliability, and in most cases link-layer retransmissions are more efficient than end-to-end retransmissions over such unreliable links. Besides, the bidirectional handshake has already been adopted by the IEEE 802.11 MAC protocols, which define a two-way DATA/ACK handshake and a four-way RTS/CTS/DATA/ACK handshake for each data packet transmission. Therefore, the receivers may also transmit CTS/ACK frames, the interference at node D0 in Fig. 6–1 becomes worse, and hence the original interference model in Equation (6.3) is not always effective in avoiding collisions.
When bidirectional handshakes are considered, three problems are not well addressed by that interference model. The first problem is packet collision. The second is the receiver blocking problem [165, 163]: the transmitter keeps (re)transmitting RTS or DATA frames while the intended receiver senses a busy channel due to other ongoing transmissions and does not or cannot respond with CTS or ACK frames. After the number of retransmissions exceeds a certain threshold [68], the transmitter drops the data packet and declares link failure, and hence route repair will be executed. We call such a receiver a blocked receiver. The third problem is the unfairness resulting from the previous two.
These problems are related to the carrier sensing strategies. There are broadly two carrier sensing strategies in the IEEE 802.11. Strategy I forbids a node from transmitting if it senses a busy channel. Strategy II allows a node to transmit in any situation, even if it senses a busy channel. In the 802.11, the first strategy is adopted for transmissions of RTS, DATA and CTS frames. The second is adopted for transmissions of ACK frames acknowledging successfully received DATA frames.
We use a simple linear topology, A —— B · · · C —— D, to illustrate these problems. We first discuss the packet collision problem. Suppose D cannot sense A's transmission and is out of B's interference range, but C is close enough to B that C's (or B's) transmission will corrupt B's (or C's) reception. When A is transmitting to B, D may initiate a transmission to C, and likewise for A when D is transmitting to C. If one or both of A and D are transmitting DATA frames, then whichever of B or C first finishes receiving its DATA frame will return an ACK frame, which does not require carrier sensing beforehand and will corrupt the reception at the other. To alleviate this problem, we might use the short RTS/CTS frames before DATA/ACK, since CTS requires carrier sensing beforehand. However, the carrier sensing strategy of CTS frames makes the receiver blocking problem worse.
The receiver blocking problem exists when the intended receiver does not return an ACK frame due to a collision. Under carrier sensing strategy I, it also exists even when the intended receiver can correctly decode the received packet but cannot respond because of the carrier sense rule. For example, in the previous topology, suppose that A and C are out of each other's sensing range, and that B and C are out of each other's interference range and cannot corrupt each other's received packets, but B and C can sense each other's transmissions. When C is transmitting a long DATA frame to D, A may initiate a transmission to B with an RTS frame. Since B senses a busy channel, it does not return a CTS frame, so A keeps retransmitting the RTS frame. A also doubles its contention window size for each failed RTS transmission and thus has a lower channel access probability. If C has many data packets destined to D and occupies the channel for a long time, A will be starved. Apparently this is unfair to A due to the MAC contention.
To alleviate these problems as much as possible for bidirectional handshakes, the carrier sensing range must be set more conservatively than in the interference model of Fig. 6–1, especially for the worst case. First, to address the packet collision problem, we need to consider the interference from the nodes Di (1 ≤ i ≤ 6), which can be at most dt closer to D0 than Ni. Denote the new value of X as X̂. Following a procedure similar to that in Section 6.2.1, the SINR in the worst case satisfies
    1/SINR = 1/(X̂ − 2)^γ + 1/X̂^γ
             + 2/[ √((X̂/2 − 1)² + (√3 X̂/2)²) − 1 ]^γ
             + 2/[ √((X̂/2 + 1)² + (√3 X̂/2)²) − 1 ]^γ + P_N/P_0         (6.16)
Numerical results show that X̂ is well approximated by X + 1, with less than 1% error when the required SINR is larger than -3 dB:

    X̂ ≅ X + 1                                                          (6.17)
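The approximation can be checked numerically by solving Equations (6.4) and (6.16) for the same SINR target (noise ignored); the sketch below does so by bisection:

```python
import math

def inv_sinr_data(X, gamma):
    # Worst-case 1/SINR of Eq. (6.4): interferers are the transmitters Ni
    mid = (X / 2 - 1) ** 2 + 3 * X ** 2 / 4
    far = (X / 2 + 1) ** 2 + 3 * X ** 2 / 4
    return ((X - 1) ** -gamma + (X + 1) ** -gamma
            + 2 * mid ** (-gamma / 2) + 2 * far ** (-gamma / 2))

def inv_sinr_bidir(Xh, gamma):
    # Worst-case 1/SINR of Eq. (6.16): each Di can be dt closer to D0 than Ni
    mid = math.sqrt((Xh / 2 - 1) ** 2 + 3 * Xh ** 2 / 4) - 1
    far = math.sqrt((Xh / 2 + 1) ** 2 + 3 * Xh ** 2 / 4) - 1
    return (Xh - 2) ** -gamma + Xh ** -gamma + 2 * mid ** -gamma + 2 * far ** -gamma

def bisect(f, target, lo, hi):
    for _ in range(60):                 # f is decreasing in X; find f = target
        m = (lo + hi) / 2
        lo, hi = (m, hi) if f(m) > target else (lo, m)
    return hi

target = 10 ** (-10.0 / 10)            # SINR requirement of 10 dB
X = bisect(lambda x: inv_sinr_data(x, 4), target, 1.01, 50)
Xh = bisect(lambda x: inv_sinr_bidir(x, 4), target, 2.01, 50)
```

At γ = 4 and a 10 dB requirement this gives X ≈ 3.1 and X̂ ≈ 4.1, within 1% of X + 1.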
For example, in the previous four-node topology, the two receivers B and C are then far enough from each other that their ACK frames cannot corrupt each other's reception. Second, when RTS/CTS are used, carrier sensing strategy II is applied to CTS/DATA/ACK frames to address the receiver blocking problem, while RTS still adopts strategy I. In the same topology as before, B is then far enough from the transmitting node C to correctly decode an RTS or DATA frame from A, and moreover it is not prevented from returning a CTS or ACK frame. The new carrier sensing range is shown in Fig. 6–6.
Apparently, the cost is to aggravate the exposed terminal problem and sacrifice spatial reuse in a more general topology. However, packet collisions and drops due to hidden terminals and blocked receivers are significantly reduced. Moreover, we expect more stable performance for higher-layer protocols, such as far fewer retransmissions and timeouts for TCP traffic and far fewer false link failures and unnecessary rerouting activities; the unfairness due to these two problems can also be greatly alleviated.
[Figure 6–6 appears here: pair A–B with interference radius (X−1)dt, the original carrier sensing radius Xdt, and the enlarged radius (X+1)dt; nodes C, D, E and F lie at the range boundaries.]

Figure 6–6: Large carrier sensing range with carrier sensing strategy II for CTS/ACK

Other problems with the large carrier sensing range are as follows. First, it requires a small sensing threshold. We do not know the achievable carrier sensing sensitivity of current products. In the following studies, we assume that current products or future technologies can support the smaller carrier sensing threshold, i.e., Tlcs times more sensitive than the original value. The new value of Tcs is T̂cs, and

    T̂_cs = T_cs / T_lcs,    T_lcs = (X̂/X)^γ                            (6.18)

When γ = 3, Tlcs = 5.28, 3.75 and 2.91 dB for X = 2, 3 and 4, respectively. Second, the larger carrier sensing range means that more nodes may contend for the shared channel, and the collision probability, as in wireless LANs, increases with the number of active nodes [160, 154].
Several methods already exist to address the collision problem within one carrier sensing range. First, the four-way handshake can be used instead of the two-way handshake to reduce the long collision periods of DATA frame transmissions when the collision probability is high and the DATA frames are long. Second, schemes [149, 148, 150, 27, 26, 164] that control the traffic delivered to the MAC layer according to the channel status can efficiently reduce the collision probability. Third, we can maintain the value of X but reduce both the carrier sensing range and the transmission range, so as to reduce the node density in each carrier sensing range.
6.2.7 Optimum Carrier Sensing Range

In short, a large carrier sensing range with appropriate sensing strategies for the different MAC frames can efficiently address the hidden terminal and receiver blocking problems, but aggravates the exposed terminal problem and decreases spatial reuse. The optimum carrier sensing range, with radius d*c = X* dt, must balance the impact of both collisions and spatial reuse, where

    X* = μ X̂ = μ(X + 1),    0 < μ ≤ 1                                   (6.19)

Simulation studies considering all the aforementioned factors will be conducted in Section 6.4 to identify the optimum value of the carrier sensing threshold.
6.3 Utilizing the Multirate Capability of 802.11 in Wireless Multihop Ad Hoc Networks

In this section, we study the impact of multihop forwarding on the optimum carrier sensing threshold, as well as how to maximize the spatial reuse ratio for multihop flows when multiple rates coexist in wireless multihop ad hoc networks. Unlike the previous section, where the objective was to maximize the aggregate one-hop throughput, here the end-to-end performance of multihop flows, in terms of delay, throughput and energy consumption, deserves more attention. We first discuss how to set the carrier sensing threshold for a multirate wireless ad hoc network. Then we study the impact of different channel rates on the end-to-end performance. Based on the analysis and the carrier sensing model of the previous section, the optimum end-to-end throughput and the corresponding carrier sensing threshold are derived for a multihop flow. Finally, we propose the bandwidth distance product as a metric for choosing the forwarding nodes of multihop flows, so as to maximize the spatial reuse ratio when multiple rates coexist.
6.3.1 How to Set the Carrier Sensing Threshold for the Multirate 802.11 MAC Protocol
In this chapter, we argue that the multirate 802.11 MAC protocol should adopt a single carrier sensing threshold for all channel rates, for three reasons. First, a single carrier sensing threshold keeps the physical/MAC protocols simple. The achievable channel rate depends on distance, mobility and channel fading, and is time-varying; multiple carrier sensing thresholds for different channel rates would greatly increase the complexity of the protocols. Second, as discussed above, the optimum carrier sensing thresholds do not change much across channel rates, so a single threshold will not sacrifice much performance. Third, multiple carrier sensing thresholds may introduce additional collisions. For example, in Fig. 6–7, a transmit-receive pair A and B, which have a large carrier sensing range corresponding to a certain channel rate, sense an idle channel, and then A transmits DATA frames. During the transmission period, another transmitter C within A's sensing range also senses an idle channel because of its smaller carrier sensing range. The new transmission from C may cause a collision at the intended receiver B.

[Figure 6–7 appears here: nodes A, B, C and D illustrating the collision.]

Figure 6–7: Multiple carrier sensing thresholds may result in collisions
With a common carrier sensing threshold, the receive threshold RXth must be set appropriately. First, RXth must be larger than or equal to the receiver sensitivity RXse required by the adopted channel rate. Second, to alleviate collisions as much as possible, one more requirement may be enforced: the power level of the received signal must be larger than or equal to CSth Tcs. According to these two requirements, we can set the common carrier sensing threshold CS*th as

    CS*_th = max_i ( RX_se(i) / T_cs(i) )
    RX_th(i) = CS*_th T_cs(i) ≥ RX_se(i)                                (6.20)

where i is the index of the different channel rates.
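In dB, Equation (6.20) becomes a max over per-rate differences. The per-rate numbers below are purely illustrative assumptions, not values from the dissertation:

```python
def common_cs_threshold_dbm(per_rate):
    """Eq. (6.20) in dB: CS*th = max_i RXse(i)/Tcs(i), i.e. max of (RXse - Tcs).
    per_rate maps channel rate -> (RXse in dBm, Tcs in dB)."""
    return max(rx - tcs for rx, tcs in per_rate.values())

# Hypothetical (RXse dBm, Tcs dB) per rate, for illustration only
per_rate = {54: (-65, 32), 18: (-77, 24), 6: (-82, 15)}
cs_star = common_cs_threshold_dbm(per_rate)
rx_th = {r: cs_star + tcs for r, (rx, tcs) in per_rate.items()}
# every RXth(i) = CS*th + Tcs(i) is then >= RXse(i)
```

Taking the maximum guarantees that the second requirement never forces RXth below the receiver sensitivity of any rate.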
6.3.2 How to Choose Next Hops and Channel Rates and Set the Carrier Sensing Threshold for Multihop Flows
For a single-hop network such as a wireless LAN, maximizing the end-to-end throughput and minimizing the end-to-end delay of a flow seems simple: just use the highest achievable channel rate between the source and the destination. However, for users far from the access point or from their intended receivers, only very low channel rates are available. Deploying a relay access point at an appropriate place, or using another user as a forwarding node, exploits higher channel rates over multiple hops instead of a low channel rate over one single hop, and hence may achieve much better performance.

In wireless multihop ad hoc networks, destinations are often outside the sources' transmission range, and packets must be forwarded over multiple hops to reach them. Selecting the next hop with the highest channel rate increases the throughput at each hop. However, packets must then travel more hops, because high channel rates have short transmission ranges, so the end-to-end delay and throughput are not necessarily improved. To determine the best next-hop candidate, it is necessary to introduce a metric combining both channel rate and hop distance.
1) End-to-end transmission delay and energy efficiency

Suppose there is a perfect packet scheduling algorithm, so that queueing delay is zero and the MAC-layer backoff period is reduced to a minimum and can be ignored. Then the end-to-end delay te2e equals the sum of the transmission delays th over all hops:

    t_e2e = Σ_{i ∈ {all hops}} t_h(i)                                   (6.21)
where i indexes the hops along the path. The per-hop transmission delay th is

    t_h = T_preamble + (L_H + L_pl)/r_c                                 (6.22)

Suppose RTS/CTS/ACK are transmitted at the basic rate and DATA at the selected channel rate rc; then

    T_preamble = (T_RTS + T_CTS + 2 T_SIFS) φ + T_SIFS + T_DIFS + 2 T_phy + T_ACK
    φ = 1 if RTS/CTS are used, 0 otherwise                              (6.23)
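Equations (6.22)–(6.23) can be wrapped in a small helper. All timing values below are illustrative 802.11b-style assumptions, not parameters given in the text:

```python
def hop_delay(r_c, L_pl, L_H, use_rtscts,
              t_rts=352e-6, t_cts=304e-6, t_ack=112e-6,
              t_sifs=10e-6, t_difs=50e-6, t_phy=192e-6):
    """Per-hop transmission delay th of Eqs. (6.22)-(6.23).
    t_ack here is the ACK MPDU air time only; the PHY preambles of the
    DATA and ACK frames are the 2 * t_phy term. All values are hypothetical."""
    phi = 1 if use_rtscts else 0
    t_preamble = ((t_rts + t_cts + 2 * t_sifs) * phi
                  + t_sifs + t_difs + 2 * t_phy + t_ack)
    return t_preamble + (L_H + L_pl) / r_c

# 1000-byte payload, 34-byte header, 11 Mbps DATA rate, no RTS/CTS
th = hop_delay(11e6, 8000, 272, False)
```

Even without RTS/CTS, the fixed interframe and preamble times contribute a sizable share of the per-hop delay at high channel rates.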
To quantify the efficiency of each hop (or of each candidate at a hop), we define a bandwidth distance product BDiP for each hop as the achievable data rate rd times the hop distance dh at that hop:

    BDiP = r_d × d_h = [ L_pl / (T_preamble + (L_H + L_pl)/r_c) ] d_h   (6.24)

The per-meter transmission delay tm of the hop is then

    t_m = t_h / d_h = L_pl / BDiP                                       (6.25)

End-to-end delay is the sum of the transmission delays at all forwarding nodes. If the path is a regular chain where every hop has the same distance and the total path length is dp, the end-to-end transmission delay te2e equals

    t_e2e = t_m d_p = L_pl d_p / BDiP                                   (6.26)

Normally, dp is proportional to the distance dsd between the source and the destination. Assuming it has a relatively fixed value, the end-to-end delay is inversely proportional to the bandwidth distance product BDiP.
[Figure 6–8 appears here: BDiP (Mbps × m) versus hop distance (m) for channel rates 1, 6, 11, 18 and 54 Mbps, with curves for rd = rc and for rd with a 1000-byte payload.]

Figure 6–8: Bandwidth distance product
In this chapter, we assume a common transmission power Pt for all channel rates. The aggregate transmission energy E per packet is then

    E = P_t × Σ_{i ∈ {all hops}} ( t_h(i) − T_SIFS − T_DIFS − 2 φ T_SIFS )   (6.27)

Since T_SIFS and T_DIFS are much smaller than T_DATA, minimizing the end-to-end delay is almost equivalent to minimizing the end-to-end energy consumption E.
Therefore, to minimize the end-to-end delay and energy consumption, we should select the candidate with the highest BDiP as the next hop, other conditions being equal. Fig. 6–8 shows the bandwidth distance product for several channel rates. To plot the figure, the advertised outdoor transmission ranges of one Cisco product [32] are used: 76, 183, 304, 396, and 610 m for 54, 18, 11, 6, and 1 Mbps, respectively. Notice that 1 and 11 Mbps are 802.11b rates and 6, 18 and 54 Mbps are 802.11g rates. We use the default parameters defined for the 802.11b/g rates by the corresponding standards [69] and [70], respectively (the 802.11g rates have shorter preambles). Two cases are considered, with and without protocol overheads. Without protocol overheads, rd = rc. With protocol overheads, the two-way DATA/ACK handshake and a 1000-byte payload size are used.
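With rd = rc (overheads ignored), the BDiP values behind Fig. 6–8 follow directly from the advertised ranges:

```python
# Advertised outdoor ranges (m) for each rate (Mbps), from the cited data sheet
range_m = {54: 76, 18: 183, 11: 304, 6: 396, 1: 610}

# BDiP = rd * dh with rd = rc, in Mbps x m (Eq. (6.24), overheads ignored)
bdip = {rate: rate * d for rate, d in range_m.items()}
# 54 Mbps gives the largest product here, yet 11 Mbps already beats 18 Mbps,
# showing that a higher channel rate does not guarantee a larger BDiP
```

This simple tabulation already previews the second observation below: rate and range trade off, so the BDiP ordering does not follow the rate ordering.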
[Figure 6–9 appears here: maximum end-to-end throughput (Mbps) versus hop distance (m) for channel rates 1, 6, 11, 18 and 54 Mbps, with curves for rd = rc and for rd with a 1000-byte payload.]

Figure 6–9: Maximum end-to-end throughput for different hop distances
Two important observations emerge from Fig. 6–8. First, larger protocol overheads result in a smaller BDiP. Second, higher channel rates do not necessarily yield a larger BDiP: the maximum BDiP depends on both hop distance and protocol overheads, in addition to the achievable channel rate.
2) End-to-end throughput and spatial reuse

In wired networks, the maximum end-to-end throughput of a multihop flow is determined by the bottleneck link with the smallest available bandwidth for the flow. The issue is much more complex in wireless networks because of the shared channel: we must consider the channel rate and transmission distance at each hop, as well as the carrier sensing range and spatial reuse.

To maximize the end-to-end throughput of a multihop flow, it is necessary to maximize the spatial reuse along the path, i.e., to schedule as many concurrent transmissions at different hops as possible. There are two requirements for scheduling successful concurrent transmissions. First, neighboring transmitters along the path must be at least dc apart, so that there is only one transmitter in each carrier sensing range, satisfying the carrier sensing requirement. Second, the concurrent transmissions at upstream and downstream nodes must not introduce enough interference to corrupt the reception at the considered transmit-receive pair.
Let 1/N denote the spatial reuse ratio of a multihop flow, where N is the hop separation between the two nearest concurrent and successful transmissions along the path, and

    N ≥ ⌈d_c / d_h⌉                                                     (6.28)

where ⌈x⌉ is the ceiling function of x, the nearest integer greater than or equal to x. Thus, for a chain topology with a common hop distance dh, the maximum end-to-end throughput of a multihop flow with at least N hops is

    S_max = r_d / N ≤ r_d / ⌈d_c / d_h⌉                                 (6.29)

because there can be only one successful transmission in every N hops, and hence the spatial reuse ratio for the chain topology is 1/N.
Equality in the above two inequalities holds only when the carrier sensing range is set to satisfy the second requirement discussed above, that is, when there is no hidden terminal problem or receiver blocking problem (as discussed in Section 6.2.6) due to the transmission N hops away along the path. This maximum end-to-end throughput is shown in Fig. 6–9, where we suppose dc = 1400 m is the minimum value satisfying the above requirements. When protocol overheads are considered and a 1000-byte payload is used, Smax equals 1.68, 1.79, 1.33, 1.34, and 0.30 Mbps for 54, 18, 11, 6, and 1 Mbps at their corresponding maximum hop distances, respectively. This verifies that higher channel rates do not necessarily yield higher end-to-end throughput; the result depends on the achievable channel rate, hop distance, carrier sensing range and protocol overheads.
As discussed in Section 6.2, the optimum carrier sensing range may allow a certain level of hidden terminal problem to balance the impact of the exposed terminal problem. In this case, Equations (6.28) and (6.29) only provide a lower bound for N and an upper bound for the maximum end-to-end throughput. To calculate the maximum end-to-end throughput accurately, N should be recalculated under the requirement that the concurrently scheduled transmissions not introduce enough mutual interference to corrupt
[Figure 6–10 appears here: two panels plotting X̂ (a) and X̂′ (b) against SINR (dB) from 0 to 30, with one curve for each of γ = 2, 3, 4, 5.]

Figure 6–10: Spatial reuse ratio for multihop flows: (a) in the worst case; (b) in a single chain topology with one-way traffic
the receptions. Thus, N is determined by the SINR requirement and by the locations of the sources and forwarding nodes. Here we use the interference model of Equation (6.16) for the worst case with bidirectional handshakes, and

    N ≤ ⌈X̂ d_t / d_h⌉,    (d_h ≤ d_t)                                   (6.30)

    S_max ≥ r_d / ⌈X̂ d_t / d_h⌉,    (d_h ≤ d_t)                         (6.31)

If all hop distances equal the maximum transmission distance dt, these two equations simplify to N ≤ ⌈X̂⌉ and Smax ≥ rd/⌈X̂⌉, where rd/⌈X̂⌉ is the maximum achievable end-to-end throughput of a multihop flow with at least ⌈X̂⌉ hops under this interference model. Thus 1/⌈X̂⌉ represents the spatial reuse ratio for multihop flows.
Generally, there is less interference than in the worst case. For a regular chain topology with a common hop distance, if we consider only the interference from the nearest upstream transmission and the nearest downstream transmission, and let X̂′ denote the corresponding value of X̂, Equation (6.16) becomes

    1/SINR = 1/(X̂′ − 1)^γ + 1/X̂′^γ + P_N/P_0    (one-way traffic)
    1/SINR = 1/(X̂′ − 2)^γ + 1/X̂′^γ + P_N/P_0    (two-way traffic)      (6.32)
148
where SINR is worse for the case of two-way traffic because the receiver of the concur-
rent downstream transmission can be closer than its intended transmitter to the considered
receiver. Thus the aggregate end-to-end throughput for two-way traffic can be lower than
one-way traffic. However, if an optimum packet scheduler is possible to schedule for-
warding traffic at one time and reverse traffic at another time, the aggregate end-to-end
throughput of two-way traffic can be as high as that of one way traffic. Thus only the case
of one-way traffic is discussed thereafter.
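For the one-way case, X̂′ follows from Equation (6.32) by bisection (noise ignored); with the ns-2 default settings quoted below, ⌈X̂′⌉ = 3, giving the 1/3 spatial reuse ratio:

```python
import math

def inv_sinr_oneway(x, gamma):
    # Eq. (6.32), one-way traffic: nearest upstream and downstream interferers only
    return (x - 1) ** -gamma + x ** -gamma

def solve_xprime(sinr_db, gamma, lo=1.01, hi=50.0):
    target = 10 ** (-sinr_db / 10)
    for _ in range(60):                 # inv_sinr_oneway is decreasing in x
        m = (lo + hi) / 2
        lo, hi = (m, hi) if inv_sinr_oneway(m, gamma) > target else (lo, m)
    return hi

xp = solve_xprime(10.0, 4)             # SINR = 10 dB, gamma = 4
N = math.ceil(xp)                      # minimum hop separation -> reuse ratio 1/N
```

Here X̂′ ≈ 2.85, so concurrent transmitters need only three hops of separation in a regular chain.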
A smaller value of X̂ than X̂′ does not help increase the throughput, because of the SINR requirement, and only causes collisions due to the hidden terminal problem. Therefore, X̂′ is the optimum value of X̂ for a multihop flow in a regular chain topology, and

    N = ⌈X̂′ d_t / d_h⌉ ≥ ⌈X̂′⌉,    (d_h ≤ d_t)
    S_max = r_d / ⌈X̂′ d_t / d_h⌉ ≤ r_d / ⌈X̂′⌉,    (d_h ≤ d_t)          (6.33)

Therefore the achievable maximum end-to-end throughput of a multihop flow is rd/⌈X̂′⌉ when dh = dt. Notice that X̂′ may not be the optimum value of X̂, which should satisfy X̂′ ≤ X̂ in a general topology and depends on many factors, as discussed in Section 6.2.
Fig. 6–10 shows both X and X′ for different requirements of SINR. When SINR = 10 dB and γ = 4 (for the far field), which are the default settings in ns2, the spatial reuse ratio is 1/3 and hence the maximum end-to-end throughput is 1/3 of the bandwidth for a chain topology with at least 3 hops. This is larger than the findings in the papers [91, 162], which show a spatial reuse ratio of 1/4. There are two reasons for the throughput loss. First, these papers study the four-way handshake RTS/CTS/DATA/ACK, and the throughput suffers from the receiver blocking problem discussed in Section 6.2.6. Second, these papers use ns2 for simulation studies, and in the current version of ns2 a MAC frame is discarded if there is already interference when the first bit of the frame arrives, even when the SINR is high enough. Fig. 6–10 also shows that a larger γ achieves a better spatial reuse ratio and hence higher end-to-end throughput, because the interference vanishes more
quickly with distance. However, a larger γ results in a shorter transmission distance, and hence requires more forwardings and consumes more energy for each packet to reach the destination.
Furthermore, S_max in the above equations only applies to multihop flows with at least N hops. For a multihop flow with fewer hops, we have

S_max = r_d/n_h,    (n_h ≤ N)    (6.34)

where n_h is the number of hops of the multihop flow.
In short, to maximize the end-to-end throughput of a multihop flow, it is necessary to select as the downstream forwarding node the node with the highest value of r_d/N, if other conditions are the same. The optimum value of N depends on the locations of the interfering transmitters and hence is not easy to calculate for an irregular topology. However, we know that the optimum carrier sensing distance d_c = X d_t does not differ much among channel rates. From Equation (6.33), we can see that S_max is approximately proportional to the bandwidth distance product BDiP = r_d d_h. Thus BDiP can be used to approximately represent the efficiency of throughput, delay and energy consumption at each hop. We will evaluate the efficiency of this metric in maximizing the end-to-end throughput, and compare it with the shortest hop algorithm for a multirate network, through simulations in the next section.
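As an illustration of using BDiP to pick the downstream forwarder (the neighbor distances are hypothetical; the rate/radius pairs follow the 802.11a settings used in the simulations of the next section), compare it with the farthest-reachable-node rule:

```python
# 802.11a rate/range pairs from the simulation setup (Mbps, meters).
RATE_RADIUS = [(54, 89), (36, 119), (18, 178), (6, 238)]

def best_rate(distance):
    """Highest channel rate reachable at the given distance, 0 if none."""
    for rate, radius in RATE_RADIUS:
        if distance <= radius:
            return rate
    return 0

def pick_by_bdip(candidates):
    """ABDiP rule: maximize rate * distance among candidate next hops."""
    return max(candidates, key=lambda d: best_rate(d) * d)

neighbors = [60, 100, 150, 230]       # hypothetical forwarder distances (m)
print(pick_by_bdip(neighbors))        # 100 (36 Mbps, BDiP = 3600 Mbps*m)
```

The farthest-reachable rule would instead pick the 230 m neighbor at only 6 Mbps (BDiP = 1380 Mbps*m), illustrating why BDiP tends to give better per-hop efficiency.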
6.4 Simulation Studies
In this section, we conduct ns2 simulations to study the impact of the carrier sensing range on the system performance and to identify the optimum carrier sensing range. We also illustrate how the carrier sensing range and spatial reuse affect the maximum throughput of multihop flows, and how the bandwidth distance product can be used as a metric to select forwarding nodes that optimize the performance in a multirate network.
6.4.1 NS2 Extensions and Simulation Setup
We have developed several important extensions to ns2 to obtain more accurate results. First, the interferences are summed, instead of being checked one by one, when determining the SINR. Second, the incoming signal is regarded as decodable, if its SINR is high enough, even when the node senses a busy channel at the arrival of the first bit of the incoming frame. Originally, ns2 only considers the capture effect when the interference arrives after the intended signal. Third, ns2 is extended to support multiple channel rates, i.e., the MAC layer has the appropriate settings of an SINR requirement, a receiver threshold and a transmission range for each channel rate. Fourth, the extensions provide an option not to sense the channel before responding with a CTS or a DATA frame to a correctly received RTS or CTS frame when RTS/CTS are used. Originally, ns2 discards a successfully received RTS frame if the channel is sensed busy. We denote this option as CSSII (carrier sensing strategy II, as discussed in Section 6.2.6) in the following subsections.
We adopt the requirements of SINR and the receiver sensitivities in Table 6–1 unless otherwise indicated, and the receiver threshold is set to the value of the corresponding receiver sensitivity for each channel rate. The default two-ray ground propagation model in ns2 is used, i.e., the path loss exponent γ = 2 when the distance is less than 86 m and γ = 4 otherwise, and the transmit power is set to 6 dBm. The transmission ranges are hence determined. In the simulations, the channel rates 54, 36, 18, and 6 Mbps are studied, and their transmission radii are 89, 119, 178, and 238 m, respectively. The IEEE 802.11a [71] protocol parameters are adopted in the simulations.
6.4.2 Optimum Carrier Sensing Range
In this subsection, we try to identify the optimum carrier sensing range. In the simulations, 150 nodes are randomly distributed in a 1000 m × 1000 m area.
First we identify the optimum carrier sensing threshold CS_th for one-hop flows. In the simulation, each node randomly selects one neighbor as the destination of one TCP connection. Notice that the neighborhood is smaller for a higher channel rate due to its smaller transmission range. Fig. 6–11 shows that the aggregate throughput achieves its maximum value when CS_th is in the range of [−76, −61] dBm for all channel rates. However, such a CS_th is even higher than RX_th for several channel rates. Apparently, it starves the flows whose source-destination distance is close to the transmission radius, as found from the more detailed simulation results.
We also identify the optimum carrier sensing threshold CS_th when multihop flows exist. In the simulation, there are 20 TCP connections in total. The sources and destinations are randomly selected under the condition that the distance between the source and the destination ranges from 500 to 600 m. The distance condition is used instead of the hop number because we also want to check the efficiency of different channel rates in delivering traffic over the same distance, and packets at higher channel rates often travel more hops to reach the same destination. Fig. 6–12 shows that the aggregate end-to-end throughput achieves its maximum value when CS_th is around −91 dBm for all channel rates. When CS_th is higher than the receiver threshold RX_th, the end-to-end throughput is almost zero. This is because some hop distances approach the maximum transmission distance, which leads to disconnections of these hops.
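The hop-count effect can be made concrete with the transmission radii from the setup above: for a 550 m source-destination distance (the middle of the 500 to 600 m condition), higher rates need more forwarding hops. A simple sketch, assuming every hop spans the maximum transmission range:

```python
import math

RADII = {54: 89, 36: 119, 18: 178, 6: 238}   # Mbps -> max range (m)

def min_hops(distance, rate):
    """Fewest hops to cover `distance` if every hop spans the full range."""
    return math.ceil(distance / RADII[rate])

for rate in sorted(RADII):
    print(f"{rate} Mbps: at least {min_hops(550, rate)} hops over 550 m")
# 6 Mbps covers 550 m in 3 hops while 54 Mbps needs 7, so the raw rate
# advantage shrinks once forwarding and contention are accounted for.
```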
There are several important observations from these results. First, to determine the optimum carrier sensing range in multihop ad hoc networks, it is not enough to examine the performance of one-hop flows; the impact of multihop forwarding must be carefully studied. Second, a single carrier sensing threshold can be optimum for all channel rates. Third, a higher channel rate does not necessarily generate a higher throughput. We must be careful in utilizing the multirate capability in the multihop environment, which will be further studied in the next subsection. These observations verify our earlier analytical results in this chapter.
[Figure: aggregate throughput (Mbps) versus CS_th (dBm) for channel rates 6, 18, 36, and 54 Mbps]
Figure 6–11: Optimum carrier sensing threshold for one-hop flows
[Figure: aggregate end-to-end throughput (Mbps) versus CS_th (dBm) for channel rates 6, 18, 36, and 54 Mbps]
Figure 6–12: Optimum carrier sensing threshold for multi-hop flows
6.4.3 Spatial Reuse and End-to-End Performance of Multihop Flows
In this subsection, we first verify that the maximum spatial reuse ratio of a regular chain topology is 1/3 instead of 1/4 under the default parameters of ns2, where the SINR requirement is 10 dB. The hop distance is set to the maximum transmission distance and the channel data rate is 6 Mbps. The maximum throughput is found by gradually increasing the carrier sensing threshold and the rate of the CBR traffic from the source. As long as 12 dB < RX_th/CS_th < 19 dB, i.e., 2 < X < 3, the maximum throughput can be achieved. A larger CS_th results in more collisions due to the hidden terminal problem and hence lower throughput; a smaller CS_th results in a lower spatial reuse ratio and hence lower throughput. When the two-way handshake DATA/ACK is used, the maximum end-to-end throughputs are 5.17, 2.52, 1.71, 1.68, 1.68, 1.68, 1.67, and 1.67 Mbps for regular chain topologies of 1 to 8 hops, respectively. When the four-way handshake RTS/CTS/DATA/ACK and CSSII are used, they are 4.95, 2.41, 1.63, 1.61, 1.61, 1.61, 1.60, and 1.59 Mbps, respectively. This verifies that the maximum end-to-end throughput of a multihop flow with at least 3 hops is 1/3 of the one-hop flow's throughput under the 10 dB SINR requirement. The slightly decreasing throughputs are due to the increasing impact of the randomness of the backoff periods on the packet scheduling at the MAC layer as the chain length grows. Thus, the results verify that 1/3, instead of 1/4, is the optimum spatial reuse ratio for the simulated settings. The value of 1/4 obtained in other papers is due to the receiver blocking problem, the ns2 implementation, and the carrier sensing strategy used by the IEEE 802.11 protocols, as discussed in Section 6.3.
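The measured chain throughputs agree closely with the relation S_max = S_1/min(n_h, 3), where S_1 is the one-hop throughput; a quick consistency check against the DATA/ACK numbers above:

```python
def predicted_chain_throughput(one_hop, n_hops, reuse_hops=3):
    """S_max = S_1 / min(n_h, 3): throughput scales down with hop count
    up to the reuse spacing, then flattens at the 1/3 reuse ratio."""
    return one_hop / min(n_hops, reuse_hops)

measured = [5.17, 2.52, 1.71, 1.68, 1.68, 1.68, 1.67, 1.67]  # Mbps
for n, m in enumerate(measured, start=1):
    p = predicted_chain_throughput(measured[0], n)
    print(f"{n} hops: predicted {p:.2f} Mbps, measured {m:.2f} Mbps")
```

The small residual gap at longer chains reflects the backoff randomness noted above.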
To check the efficiency of the bandwidth distance product BDiP in maximizing the spatial reuse ratio, and hence optimizing the end-to-end performance of multihop flows, we also simulate a random chain topology. In the topology, there are 30 nodes in total. The distance between the source and the destination is 2000 meters. The other 28 nodes are randomly distributed between the source and the destination. Three algorithms for determining the forwarding nodes and the channel rates are compared with each other. The first one is similar to the shortest hop algorithm, i.e., selecting the farthest reachable node and using the highest achievable rate between this node and the transmitter. The second one selects the farthest node among those with the same highest channel rate as the forwarding node. The third one selects the node with the highest value of BDiP as the downstream forwarding node at each hop. They are referred to as Adr (first consider the distance, then the rate), Ard (first consider the rate, then the distance) and ABDiP (maximize the bandwidth distance product), respectively, in the following discussions.
The maximum end-to-end throughputs are achieved when P_t/CS_th is around 97∼101 dB (CS_th = −95 ∼ −91 dBm) for all three algorithms, and they are 1.64, 1.87 and 2.08 Mbps for Adr, Ard and ABDiP, respectively. The improvements of Ard and ABDiP over Adr are 14% and 27%, respectively. Notice that these simple algorithms are only used to show the advantages of the bandwidth distance product as a routing metric. Like bandwidth, the bandwidth distance product is a link-based metric. Therefore, more sophisticated routing algorithms, such as the widest path routing algorithm, can adopt it as a routing metric to route around obstacles and to compute a loop-free path in a more general topology.
6.5 Conclusions
In this chapter, we analyze the impact of several important factors on the optimum carrier sensing threshold in multirate and multihop wireless ad hoc networks. Several key observations are as follows:
• The multihop property must be considered when deciding the optimum carrier sensing threshold. The optimum carrier sensing threshold for one-hop flows does not work for multihop flows.
• Different channel rates have similar optimal carrier sensing thresholds. Therefore, a
single carrier sensing threshold for different rates could be efficient as well as simple.
• A higher channel data rate does not necessarily generate higher throughput. We need to be careful in utilizing the multirate capability.
• The shortest hop routing algorithm is not appropriate for multirate and multihop wireless ad hoc networks. Simulation results show that the algorithms Ard (first consider the rate, then the distance) and ABDiP (maximize the bandwidth distance product) can improve the throughput by 14% and 27%, respectively. Hence, the results demonstrate that the bandwidth distance product could be a good routing metric in multirate ad hoc networks.
• The maximum end-to-end throughput is derived for a multihop flow under a given SINR requirement. The maximum throughput can be achieved only when the carrier sensing threshold is appropriately set; the current ns2 version fails to do so, and several ns2 extensions have been developed to achieve the maximum throughput. For example, the maximum spatial reuse ratio of a multihop flow is 1/3 instead of 1/4 under the 10 dB SINR requirement.
CHAPTER 7
A DUAL-CHANNEL MAC PROTOCOL FOR MOBILE AD HOC NETWORKS
The IEEE 802.11 MAC protocol has been the standard for wireless LANs and is also implemented in many simulation packages for mobile ad hoc networks. However, IEEE 802.11 MAC has been shown to be quite inefficient in multihop mobile environments. Besides the well-known hidden terminal problem and the exposed terminal problem, there also exists the receiver blocking problem, which may result in link/routing failures and unfairness among multiple flows. Moreover, the contention and interference from upstream and downstream nodes seriously decrease the packet delivery ratio of multihop flows. All these problems can lead to an "explosion" of control packets and poor throughput performance. In this chapter, we first analyze these anomalous phenomena in multihop mobile ad hoc networks. Then, we present a novel and effective random medium access control (MAC) protocol based on the IEEE 802.11 MAC protocol. The new MAC protocol uses an out-of-band busy tone and two communication channels, one for control frames and the other for data frames. The newly designed message exchange sequence provides a comprehensive solution to all the aforementioned problems. Extensive simulations demonstrate that our scheme provides a much more stable link layer, greatly improves the spatial reuse, and works well in reducing packet collisions. It improves the throughput by 8% to 28% for one-hop flows and by 2∼5 times for multihop flows under heavy traffic compared to the IEEE 802.11 MAC.
7.1 Introduction
Contention-based medium access control (MAC) protocols have been widely studied for wireless networks due to their low cost and easy implementation. IEEE 802.11 MAC [68] is such a protocol; it has been the standard for wireless LANs and has also been incorporated in many wireless testbeds and simulation packages for mobile ad hoc networks.
It adopts the four-way handshake procedure, i.e., RTS/CTS/DATA/ACK. Short packets, RTS and CTS, are used to avoid collisions between long data packets. The NAV (Network Allocation Vector) value carried by RTS/CTS/DATA/ACK is used to reserve the medium to avoid potential collisions (i.e., virtual carrier sensing) and hence mitigate the hidden terminal problem. The ACK is used as a confirmation of successful, error-free transmission.
However, the ineffectiveness of IEEE 802.11 MAC in multihop mobile ad hoc networks has been widely recognized as a serious problem. Packet collisions over the air are much more severe in multihop environments than in wireless LANs [68, 150, 160, 161, 153]. The packet losses due to such MAC-layer contention inevitably affect the performance of higher-layer networking schemes such as TCP congestion control and routing maintenance, because a node does not know whether an error is due to a collision or an unreachable destination [19, 108, 140, 161, 153, 28, 29, 147, 162].
The source of the above problems lies mainly in the MAC layer. Hidden terminals introduce collisions, and exposed terminals lead to a low spatial reuse ratio. Besides these two notorious problems, the receiver blocking problem, i.e., the intended receiver not responding to RTS or DATA due to interference or the virtual carrier sensing requirements of other ongoing transmissions, also deserves serious attention. This problem becomes more severe in multihop environments and results in packet dropping, starvation of some traffic flows or nodes, and network-layer re-routing, which we will elaborate on later in Section 7.3. Furthermore, for multihop flows, the contention and interference from upstream and downstream nodes and from other flows can lead to poor packet delivery performance.
Many schemes have been proposed in the literature to reduce the severe collisions of DATA packets at the MAC layer. BTMA [122] uses a busy tone to address the hidden terminal problem. The base station broadcasts a busy tone signal to keep hidden terminals from accessing the channel when it senses a transmission. It relies on a centralized network infrastructure, which is not applicable in mobile ad hoc networks. FAMA-NCS [47] uses long dominating CTS packets to act as a receive busy tone that prevents any competing transmitter in the receiver's range from transmitting. This requires any node hearing interference to keep quiet for the period of one maximum data packet to guarantee no collision with the ongoing data transmission, which is obviously inefficient, especially when the RTS/CTS negotiation process fails or the DATA packet is very short.
Some multi-channel schemes based on random access have also been investigated in the last few years. One common approach to avoiding collisions between control packets and data packets is to use separate channels for the different kinds of packets. DCA [136] uses one control channel for RTS/CTS and one or more data channels for DATA/ACK. It presents one method of utilizing multiple channels but does not solve the hidden terminal problem. The dual busy tone multiple access (DBTMA) schemes [59, 60, 135] handle the hidden terminal and exposed terminal problems. They use a transmit busy tone to prevent exposed terminals from becoming new receivers, a receive busy tone to prevent hidden terminals from becoming new transmitters, and a separate data channel to avoid collisions between control packets and data packets. DBTMA, however, does not consider ACK packets, which, if used, may collide with DATA packets, while acknowledgments (ACKs) are needed on unreliable wireless links. PAMAS [117] uses a separate control channel to transmit both RTS/CTS packets and busy tone signals. It gives a solution to the hidden terminal problem and mainly focuses on power savings. MAC-SCC [92] uses two Network Allocation Vectors (NAVs), for the data channel and the control channel, respectively. The two NAVs make it possible for the control channel to schedule not only the current data transmission but also the next one. Although it reduces the backoff time, it does not address the aforementioned problems.
To the best of our knowledge, there has been no comprehensive study of, or good solution to, the combination of the hidden terminal problem, the exposed terminal problem, the receiver blocking problem, and the intra-flow and inter-flow contention problems. All of them contribute to the poor performance of the MAC protocol in multihop wireless mobile ad hoc networks. Most current schemes aggravate the receiver blocking problem while alleviating the hidden terminal problem, and do not fully address the problems of multihop flows in mobile ad hoc networks.
In this chapter, we utilize two channels (dual-channel), for control packets and DATA packets separately. RTS and CTS are transmitted in a separate control channel to avoid collisions with data packets. A Negative CTS (NCTS) is used to solve the receiver blocking problem and is also transmitted in the control channel. An out-of-band receiver-based busy tone [59] is used to solve the hidden terminal problem. We do not use ACK here, because there is no collision with the ongoing DATA packet. To address packet errors due to the imperfect wireless channel, we introduce a Negative Acknowledgment (NACK) signal, a continued busy tone signal sent when the receiver determines that the received DATA packet is corrupted. The sender will not misinterpret this NACK signal, because there are no other receivers in its sensing range and hence no interfering NACK signals, and it will assume that the transmission is successful if no NACK signal is sensed. Furthermore, our protocol has an inherent mechanism to resolve intra-flow contention and can achieve optimum packet scheduling for the chain topology. It turns out that this protocol solves almost all the aforementioned problems and does not require synchronized transmission at the MAC layer as in the papers [7, 118].
The rest of this chapter is organized as follows. Section 7.2 presents the basic concepts of the physical model that are important for designing the MAC protocol. Then, Section 7.3 elaborates on the sources of collisions when the IEEE 802.11 MAC protocol is applied in multi-hop mobile ad hoc networks, and the ideal protocol behavior we may desire. Section 7.4 describes the new MAC protocol for multihop mobile ad hoc networks. Simulation results are given in Section 7.5. Finally, we conclude the chapter in Section 7.6.
7.2 Background
7.2.1 Physical Model
In wireless networks, the signal to interference plus noise ratio (SINR) must be larger than some threshold β for the receiver to detect the received signal correctly:

SINR_i = P_i / (Σ_{k≠i} P_k + N) > β    (7.1)

The received power P_r follows

P_r = P_o (d_o/d)^α,    (7.2)

where d_o is the reference distance, P_o is the received power at the reference distance, and α ≥ 2 is the power-loss exponent. In the following discussions, we assume all nodes use the same transmission power.
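A minimal sketch of this model (hypothetical, normalized distances and powers): a reception succeeds only if the signal power divided by the sum of all interferers plus noise, per (7.1) and (7.2), exceeds β.

```python
def rx_power(d, p0=1.0, d0=1.0, alpha=4):
    """Received power from (7.2): P_r = P_o * (d_o/d)**alpha (normalized)."""
    return p0 * (d0 / d) ** alpha

def decodable(d_signal, interferer_distances, beta=10.0, noise=1e-12):
    """(7.1): signal over summed interference plus noise must exceed beta."""
    signal = rx_power(d_signal)
    interference = sum(rx_power(d) for d in interferer_distances)
    return signal / (interference + noise) > beta

# One interferer twice as far away as the sender: SINR = 2**4 = 16 > 10.
print(decodable(1.0, [2.0]))        # True
# Two such interferers aggregate to SINR = 8 < 10: the frame is lost.
print(decodable(1.0, [2.0, 2.0]))   # False
```

Each interferer alone would be tolerable; it is the aggregation that corrupts the frame, which is why the ns2 extensions of Chapter 6 sum the interference instead of checking interferers one by one.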
7.2.2 Transmission Range and Sensing/Interference Range
Within the transmission range, the receiver should be able to correctly demodulate (or decode) the signal when there is no interference, i.e., the received power P_r must be larger than a threshold RX_Thresh. This defines the maximum transmission distance, called the transmission range:

d_t = d_o (P_o/RX_Thresh)^{1/α}.    (7.3)
If there is interference from another transmission at the receiver, the power of the interference signal P_i must be sufficiently smaller than that of the intended signal P_r, i.e., P_i × CP_Thresh < P_r, where CP_Thresh > 1 is the capture threshold. So

d_i = d_r (P_r/P_i)^{1/α} > d_r × CP_Thresh^{1/α} = ∆_c × d_r,    (7.4)

where d_i is the distance from the interference source to the receiver, and d_r is the distance from the sender to the receiver. The quantity ∆_c = CP_Thresh^{1/α} > 1 defines a zone within which other transmissions would interfere with the reception.
When the receiver is the maximum transmission distance d_t away from the sender, the minimum interference distance d_imin that still allows correct demodulation at the receiver, and the corresponding interference power P_imin, are

d_imin = ∆_c × d_t,    P_imin = P_t (d_t/d_imin)^α = P_t/CP_Thresh.    (7.5)
So the sender should be able to sense interference at power level P_imin before transmission, i.e., interference from d_imin away, to avoid interfering with other ongoing transmissions. Considering that there may be more than one interfering transmission in the neighborhood of the intended receiver, the sensing range d_s should be even larger than d_imin, i.e.,

d_s = ∆_s × d_t,    ∆_s > ∆_c,    (7.6)

which can guarantee correct reception at the receiver if the sender senses the channel idle, in spite of possible interference from multiple sources outside the sensing range.
The sensing range is also called the interference range in much of the literature [91], since other transmissions in this range may introduce enough interference to corrupt the intended signal. The widely used network simulation tool ns2 implements the settings of the Lucent WaveLAN card, and the default values are CP_Thresh = 10 dB, d_t = 250 m, ∆_c ≈ 1.78, and ∆_s ≈ 2.2. Some recent works [104, 102] on power control schemes adopt CP_Thresh = 6 dB, ∆_c ≈ 1.41, and ∆_s ≈ 2.2. Thus, it is reasonable to assume that the radius of the sensing/interference range can be more than twice the transmission range.
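The quoted ∆_c values follow directly from ∆_c = CP_Thresh^{1/α} with α = 4, as a one-line check confirms:

```python
def delta_c(cp_thresh_db, alpha=4):
    """Interference guard ratio from (7.4): CP_Thresh ** (1/alpha)."""
    return (10 ** (cp_thresh_db / 10.0)) ** (1.0 / alpha)

print(round(delta_c(10), 2))   # 1.78 -> the ns2 WaveLAN default
print(round(delta_c(6), 2))    # 1.41 -> the power-control settings
```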
7.3 Problems and The Desired Protocol Behavior
In this section, we describe a few problems in multi-hop mobile ad hoc networks when
the IEEE 802.11 MAC protocol is deployed.
7.3.1 Hidden and Exposed Terminal Problem
A hidden terminal is a node outside the sensing range of the transmitter but within that of the receiver. It does not know that the transmitter is transmitting, and hence may transmit to some node, resulting in a collision at the receiving node.
[Figure: nodes A through F placed on a line, with small circles marking transmission ranges and large circles marking sensing ranges]
Figure 7–1: A simple scenario to illustrate the problems
Fig. 7–1 illustrates a simple example, where the small circles indicate the edges of the transmission ranges and the large circles indicate the edges of the sensing ranges. D is a hidden terminal of A. It cannot sense A's transmission but may still interfere with B's reception if D begins a transmission.
An exposed terminal is a node outside the sensing range of the receiver but within that of the transmitter. The exposed node senses the medium busy and does not transmit while the transmitter is transmitting, leading to bandwidth under-utilization. In Fig. 7–1, F is an exposed terminal of A. When A is transmitting to B, F senses A's transmission and keeps silent. However, F could transmit to other nodes outside A's sensing range without interfering with B's reception.
In the four-way handshake procedure, RTS/CTS and DATA/ACK are exchanged bidirectionally. Therefore, an exposed node of one member of a transmitter-receiver pair is also a hidden node of the other. Besides the hidden terminal, the exposed terminal of the transmitter should not initiate any new transmission during the ongoing transmission either, to avoid collisions with the short CTS or ACK packets. This leads to significant inefficiency in spatial reuse.
7.3.2 Limitations of NAV Setup Procedure
IEEE 802.11 family protocols adopt the NAV setup procedure to claim the reservation of the channel for a certain period, to avoid collisions from hidden terminals. The NAV field carried by RTS/CTS/DATA/ACK notifies the neighbors to keep silent during the period indicated by the NAV value.
The NAV setup procedure cannot work properly when there are collisions. As shown in Fig. 7–1, suppose A wants to send packets to B, and they exchange RTS and CTS. If E is transmitting when B transmits its CTS to A, B's and E's transmissions will collide at C, and C cannot set its NAV according to the corrupted CTS from B.
The NAV setup procedure is redundant if a node continuously senses the carrier. For example, in Fig. 7–1, the transmission ranges of both A and B are covered by the common area of their sensing ranges. Without collisions, C can set its NAV correctly when receiving B's CTS. However, it can also sense A's transmission, which prevents C from transmitting even without the NAV setup procedure. The RTS's NAV is not necessary either, because any node that can receive the RTS correctly can also sense B's CTS and the succeeding DATA and ACK, and will not initiate a new transmission that interrupts the ongoing one.
The NAV setup procedure does not solve the hidden terminal problem even if the receiver can correctly receive the CTS and set its NAV. In Fig. 7–1, D is a hidden terminal of A and is out of the transmission range of B. It cannot sense A's transmission and cannot correctly receive B's CTS either. Thus, when A is transmitting a long data packet to B, D may initiate a new transmission, which will result in a collision at B.
7.3.3 Receiver Blocking Problem
A blocked receiver is one that cannot respond to an RTS intended for itself because of other transmissions in its sensing range. This may result in unnecessary RTS retransmissions and subsequent DATA packet discarding. When it is in the range of some ongoing transmission, the intended receiver cannot respond to the sender's RTS, according to the carrier sensing strategy in the IEEE 802.11 standard. The sender will then retransmit the packet. The backoff window size is doubled each time the RTS transmission fails, growing larger and larger until the sender finally discards the packet. When the ongoing transmission finishes, the packet in the queue of the old sender will have higher priority than the new one, because the old sender resets its backoff window size to a much smaller value than that of the new one. So the old sender has a high probability of continuing to transmit, while the new one keeps doubling its backoff window size and discards packets when the maximum number of transmission attempts is reached. This therefore results in serious unfairness among flows and severe packet discarding.
For example, in Fig. 7–1, when D is transmitting to E, A sends an RTS to B but will not receive the intended CTS from B. This is because B cannot correctly receive A's RTS due to the collision with D's transmission. Then A keeps doubling its contention window and retransmitting until it discards the packet. If D has a burst of traffic to E, it will continuously occupy the channel, which will starve the flow from A to B.
The hidden terminal problem only makes the receiver blocking problem worse. In the above example, even if A gets a chance to transmit a packet to B, its hidden terminal D could start a transmission and collide with A's transmission at B, because D cannot sense A's transmission. Therefore, A has almost no chance to successfully transmit a packet to B while D has packets destined to E.
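The unfairness mechanism can be sketched in a few lines (illustrative parameters: CW_min = 31, CW_max = 1023 and a retry limit of 7 are typical DCF values; the exact numbers depend on the PHY):

```python
CW_MIN, CW_MAX, RETRY_LIMIT = 31, 1023, 7   # illustrative DCF parameters

def backoff_windows_until_drop():
    """Contention windows drawn by a sender whose RTS never earns a CTS
    because its intended receiver is blocked: the window doubles on each
    failure until the packet is discarded at the retry limit."""
    cw, windows = CW_MIN, []
    for _ in range(RETRY_LIMIT):
        windows.append(cw)
        cw = min(2 * (cw + 1) - 1, CW_MAX)   # 31 -> 63 -> 127 -> ...
    return windows

print(backoff_windows_until_drop())
# [31, 63, 127, 255, 511, 1023, 1023]; meanwhile the ongoing sender
# resets to CW_MIN after every success, so it keeps winning the channel.
```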
7.3.4 Intra-Flow Contention
Intra-flow contention is the contention among the transmissions of packets at upstream and downstream nodes along the path of the same flow. The packet at each hop along the path may encounter collisions and be discarded. Thus, far fewer packets reach the last few nodes of the path than the first few nodes, and the resources consumed by the discarded packets are wasted.
Another abnormality is that packets continuously accumulate at the first few hops of the path. The reason is that transmissions at the first few hops encounter less contention than those at subsequent nodes. A simple example, shown in Fig. 7–2, is a chain topology with more than 5 hops, where the nodes are separated by a fixed distance slightly less than the maximum transmission distance. The first node contends with three other nodes along the chain; this number is four for the second node and five for the third node. This means the first node could inject more packets into the chain than the subsequent nodes can forward. Li et al. discussed this phenomenon in [91] and indicated that the 802.11 MAC fails to achieve the optimum throughput for the chain topology.
[Figure: a chain of seven nodes, numbered 0 through 6]
Figure 7–2: Chain topology
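The 3/4/5 contention counts can be reproduced with a simplified model (an illustration, not the chapter's analysis): with the sensing/interference range roughly twice the hop distance, node j conflicts with the hop i → i+1 if j is within two hops of transmitter i (carrier sensing) or of receiver i+1 (interference).

```python
def contenders(i, n_nodes, rng=2):
    """Nodes (other than transmitter i) whose transmissions conflict with
    hop i -> i+1: within `rng` hops of the transmitter or the receiver."""
    tx, rx = i, i + 1
    return [j for j in range(n_nodes)
            if j != tx and (abs(j - tx) <= rng or abs(j - rx) <= rng)]

for hop in range(3):
    print(hop, len(contenders(hop, 7)))
# Hops 0, 1, 2 face 3, 4, 5 contenders: the first hop sees the least
# contention, so the source injects more than the chain can forward.
```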
7.3.5 Inter-flow Contention
Inter-flow contention happens when two or more flows pass through the same region. The transmission of packets in this region encounters interference and collisions not only from packets of its own flow but also from other flows. This region becomes a bottleneck, and the packet accumulation at the first few hops of the flows can be even more severe than in the scenario with only intra-flow contention.
7.3.6 The Desired Protocol Behavior
The desired MAC protocol for mobile ad hoc networks should resolve the hidden/exposed terminal problems and the receiver blocking problem. It should guarantee that there is only one receiver in the range of a transmitter and only one transmitter in the range of a receiver. Exposed nodes can start to transmit in spite of the ongoing transmission; hidden nodes cannot initiate new transmissions but may receive packets. Thus, to maximize spatial reuse, the protocol should allow multiple receivers in the range of any receiver to receive, and multiple transmitters in the range of any transmitter to transmit. When it does not receive the returned CTS, the transmitter should also know whether its intended receiver is blocked or outside its transmission range, to avoid discarding packets and undesirable behavior at the higher protocol layers, such as false alarms of route failures.
7.3.7 Limitation of IEEE 802.11 MAC Using Single Channel
Collisions between RTS, CTS, DATA and ACK are the culprits preventing the MAC protocol from achieving the aforementioned desired behavior. The exposed terminal cannot initiate new transmissions, which might prevent the current transmitter from correctly receiving the ACK. The hidden terminal, which cannot correctly receive the CTS or sense the transmission, may initiate a new transmission that collides with the ongoing one. Furthermore, it should not become a receiver, because its CTS/ACK may cause collisions at the receiver of the current transmission, and its own DATA reception may be corrupted by the ACK from the current receiver. If the intended receiver of a new transmission is in the range of the ongoing transmission, it may not be able to correctly receive the RTS and/or may sense the medium busy, and hence will not return a CTS. Thus, the intended sender cannot distinguish whether the receiver is blocked or out of the transmission range.
To summarize, many of the aforementioned problems cannot be solved if a single channel is used, as in the IEEE 802.11 MAC protocol.
7.4 DUCHA: A New Dual-Channel MAC Protocol
In this section, we present the new dual-channel MAC protocol (DUCHA) for multi-
hop mobile ad hoc networks.
7.4.1 Protocol Overview
To achieve the desired protocol behavior, we use two separate channels for DATA and
control packets. DATA packets are transmitted over the data channel, while RTS and CTS
are transmitted over the control channel. A negative CTS (NCTS), also sent over the control
channel, is used to solve the receiver blocking problem, and an out-of-band receiver-based
busy tone [122, 59] is used to solve the hidden terminal problem. An ACK is unnecessary
because our protocol guarantees that DATA packets suffer no collisions. To deal with
wireless channel errors, we introduce a NACK signal: when the receiver determines that
the received DATA packet is corrupted, it simply continues the busy tone signal. The
sender will not misinterpret this NACK signal, since there are no other receivers in its
sensing range and hence no interfering NACK signals; it concludes that the transmission
is successful if no NACK signal is sensed.

Figure 7–3: Basic message exchange of the proposed protocol. RTS, CTS and the NACK
period occupy the control channel; the DATA packet occupies the DATA channel; the
receiver's busy tone covers the DATA reception and is lengthened into a NACK if the
DATA packet is corrupted due to fading.
Our protocol DUCHA adopts the same transmission power and the same capture threshold
CPThresh on both the control and DATA channels. The receive power threshold for correct
reception, RXThresh, is also the same for the two channels, so the two channels have the
same transmission and sensing ranges. The basic message exchange sequence is shown in
Fig. 7–3.
7.4.2 Basic Message Exchange
RTS
Before initiating a new transmission of an RTS, a node must sense the control channel
idle for at least a DIFS and sense no busy tone signal. If it has sensed a noisy (busy)
control channel for longer than or equal to an RTS period, it should defer long enough (at
least SIFS + CTS + 2 × max-propagation-delay) to avoid colliding with the CTS reception
at some sender. For example, in Fig. 7–1, when A finishes transmitting its RTS to B, F
should wait at least long enough for A to finish receiving the possible CTS/NCTS from B.
CTS/NCTS
Any node correctly receiving the RTS should return a CTS after a SIFS, regardless
of the control channel status, if the DATA channel is idle. If both the control and DATA
channels are busy, it ignores the RTS to avoid possible interference with the CTS reception
at the transmitter of another RTS. If the control channel has been idle for at least the
duration of one CTS packet and the DATA channel is busy, it returns an NCTS. The NCTS
carries in its duration field an estimate of the remaining DATA transmission time, computed
as the difference between the transmission time of a maximum DATA packet and the time
for which it has sensed a busy DATA channel.
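Numerically, with the 1000-byte maximum DATA packet and the 1.7 Mbps data channel rate of Table 7–1, the duration-field estimate can be sketched as follows; the function and its interface are illustrative only.

```python
# Sketch of the NCTS duration-field estimate: remaining busy time on the
# DATA channel, bounded below by zero.
MAX_DATA_BITS = 1000 * 8     # maximum DATA packet, 1000 bytes (Table 7-1)
DATA_RATE = 1.7e6            # DATA channel speed, 1.7 Mbps (Table 7-1)

def ncts_duration(busy_sensed_time):
    """Estimated time left until the neighboring DATA transmission ends."""
    max_data_time = MAX_DATA_BITS / DATA_RATE
    return max(0.0, max_data_time - busy_sensed_time)

# After sensing a busy DATA channel for 2 ms, roughly 2.71 ms remain.
print(f"{ncts_duration(2e-3) * 1e3:.2f} ms")
```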
DATA
The RTS transmitter should start the DATA transmission after correctly receiving the CTS
if no busy tone signal is sensed. If the sender receives an NCTS, it defers its transmission
according to the duration field of the NCTS. Otherwise, it assumes a collision has occurred,
doubles its backoff window and defers its transmission.
Busy Tone
The intended receiver begins to sense the data channel after it transmits the CTS. If
the receiver does not detect a signal with enough power in the data channel by the time the
first few bits of the DATA packet should have arrived, it assumes that the sender is not
transmitting DATA and terminates the receiving procedure. Otherwise, it transmits a busy
tone signal to prevent possible transmissions from hidden terminals.
NACK
The intended receiver maintains a timer, set according to the duration field of the
previously received RTS, that indicates when the reception of the DATA packet should
finish. If the timer expires before the correct DATA packet has been received, the receiver
assumes that the DATA transmission has failed and sends a NACK by continuing the busy
tone signal for an appropriate period. If it correctly receives the DATA packet, it stops the
busy tone signal and finishes the receiving procedure.
The sender assumes that its DATA transmission is successful if there is no NACK
signal sensed during the NACK period. Otherwise, it assumes that its transmission fails
because of wireless channel error and then starts the retransmission procedure.
In addition, during the NACK period as well as the DATA transmission period, no
other node in the sensing range of the sender is allowed to become a receiver of DATA
packets, and no other node in the sensing range of the receiver is allowed to become a
sender of DATA packets. This avoids confusion between NACK signals and normal busy
tone signals.
In the above message exchange, our protocol transmits or receives packets on only one
channel at any time. We use only a receive busy tone and no transmit busy tone, so a node
must sense the DATA channel before transmitting CTS/NCTS packets to avoid becoming
a receiver within the sensing range of a transmitter of an ongoing DATA packet
transmission.
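The sender-side branching of this exchange can be summarized in a small sketch; the boolean inputs and return labels below are hypothetical abstractions of the channel events described above, not part of any DUCHA implementation.

```python
# Hypothetical sketch of the DUCHA sender's decision logic (Section 7.4.2).
def ducha_send(ctrl_idle, busy_tone_now, reply, nack_sensed):
    """One send attempt; returns 'sent', 'deferred' or 'backoff'.

    ctrl_idle     -- control channel idle for at least a DIFS
    busy_tone_now -- busy tone sensed before sending the RTS
    reply         -- 'CTS', 'NCTS' or None (nothing decodable received)
    nack_sensed   -- lengthened busy tone heard during the NACK period
    """
    if not ctrl_idle or busy_tone_now:
        return 'deferred'            # may not even send the RTS yet
    if reply == 'CTS':
        # DATA goes out on the DATA channel, collision-free by design,
        # so success is inferred from the *absence* of a NACK.
        return 'backoff' if nack_sensed else 'sent'
    if reply == 'NCTS':
        return 'deferred'            # receiver blocked; retry after duration field
    return 'backoff'                 # assume RTS collision; double backoff window
```

Note that the success path never waits for an ACK: silence during the NACK period is the positive acknowledgment.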
7.4.3 Solutions to the Aforementioned Problems
In the following discussion, we illustrate with examples how DUCHA solves these
well-known problems.
Solution to the hidden terminal problem
As shown in Fig. 7–1, B broadcasts a busy tone signal while it receives the DATA
packet from A. The hidden terminal of A, i.e., D, can hear B's busy tone signal and thus
will not transmit in the DATA channel, avoiding interference with B's reception. The busy
tone signal from the DATA receiver thus prevents any hidden terminal of the intended
sender from interfering with the reception, so no DATA packets are dropped due to the
hidden terminal problem.
Solution to the exposed terminal problem
In Fig. 7–1, B is the exposed terminal of D when D is transmitting a DATA packet to
E. B can initiate an RTS/CTS exchange with A even though it can sense D's transmission
in the DATA channel. After a successful RTS/CTS exchange between B and A, B begins
to transmit the DATA packet to A. Since A is out of the sensing range of D and E is out of
the sensing range of B, both A and E can correctly receive the DATA packets destined to
them. Thus, exposed terminals can transmit DATA packets in DUCHA, which greatly
enhances the spatial reuse ratio.
Solution to the receiver blocking problem
In Fig. 7–1, B is the blocked receiver in the IEEE 802.11 MAC protocol when D is
transmitting DATA packets to E. In DUCHA, B can correctly receive A's RTS in the
control channel while D sends DATA packets in the DATA channel. B then returns an
NCTS to A because it senses a busy medium in the DATA channel. The duration field of
the NCTS contains an estimate of the remaining busy period in the DATA channel, i.e.,
the time needed to finish D's transmission. When A receives the NCTS, it defers its
transmission and stops unnecessary retransmissions; it retries after the period indicated in
the duration field. Once the RTS/CTS exchange between A and B succeeds, A begins to
transmit the DATA packet to B. B will correctly receive it, because there is no hidden
terminal problem for receiving DATA packets.
Improvement of spatial reuse
As discussed above, exposed terminals can transmit DATA packets. Furthermore, in
our protocol, a hidden terminal can receive DATA packets even though it cannot transmit.
In Fig. 7–1, D is the hidden terminal of A when A is transmitting a DATA packet to B.
After the RTS/CTS exchange between E and D succeeds in the control channel, E can
transmit DATA packets to D. Since D is out of A's sensing range and B is out of E's
sensing range, both B and D can correctly receive their intended DATA packets. Thus,
DUCHA can greatly increase spatial reuse by allowing multiple transmitters, or multiple
receivers, within sensing range of each other to communicate. At the same time, there are
no collisions of DATA packets or NACK signals, because there is only one transmitter in
any intended receiver's sensing range and only one receiver in any intended transmitter's
sensing range.
Inherent mechanism to solve the intra-flow contention problem
In our DUCHA protocol, the receiver of a DATA packet has the highest priority to
access the channel for the next DATA transmission. When a node correctly receives a
DATA packet, it can immediately start the backoff procedure for the new transmission,
while the upstream and downstream nodes in its sensing range are prevented from
transmitting DATA packets during the NACK period. In fact, this achieves optimum
packet scheduling for a chain topology, and similarly for any single-flow scenario.
For example, in Fig. 7–2, node 1 has the highest priority to access the channel when
it receives a packet from node 0 and hence immediately forwards the packet to node 2.
For the same reason, node 2 immediately forwards the received packet to node 3, and
node 3 then forwards it to node 4. Because node 0 can sense the transmissions of nodes 1
and 2, it will not interfere with them. Node 0 cannot send packets to node 1 while node 3
forwards a packet to node 4 either, because node 1 is in the interference range of node 3.
When node 4 forwards the packet to node 5, node 0 gets a chance to send a packet to
node 1. In general, nodes that are 4 hops apart along the path can simultaneously send
packets to their next hops. Thus, the procedure can utilize 1/4 of the channel bandwidth,
the maximum throughput that a chain topology can approach [91].
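This scheduling argument can be illustrated with a tiny greedy sketch: assuming, as stated above, that hops at least 4 apart along the chain can be active simultaneously, every fourth hop is activated, giving 1/4 utilization.

```python
# Greedy one-slot schedule for a chain: hop i and hop j may be active
# together only if they are at least 4 hops apart (assumption from the text).
def schedule_slot(hops, min_separation=4):
    active = []
    for h in hops:
        if all(abs(h - a) >= min_separation for a in active):
            active.append(h)
    return active

hops = list(range(8))                  # 9-node chain => 8 hops, numbered 0..7
slot = schedule_slot(hops)
print(slot, len(slot) / len(hops))     # [0, 4] 0.25
```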
7.4.4 Remarks on the Proposed Protocol
There is no collision for DATA packets in the proposed protocol because there is only
one DATA transmitter in the sensing range of any ongoing receiver in the DATA channel.
The out-of-band busy tone signal prevents any hidden nodes from initiating new DATA
transmission in the DATA channel.
There is no collision for NACK signal, i.e., the continuing busy tone, either, because
there is only one DATA receiver in the sensing range of any ongoing sender in the DATA
channel. After successful RTS/CTS exchange between the sender and its intended receiver,
all other nodes in the sensing range of the sender can sense its transmission in the DATA
channel and thus are restricted from becoming DATA receivers.
The control overhead can be reduced even though we introduce a new NCTS packet
and a new NACK signal. First, an NCTS is transmitted only when the intended receiver
cannot receive the DATA packet; it saves many unnecessary RTS retransmissions, as
discussed in Section 7.4.3. Second, the NACK signal occurs only when the DATA packet
is corrupted by channel fading, so it is transmitted far less frequently than ACK packets in
the 802.11 MAC protocol. Third, there are no collisions of DATA packets, so the RTS and
CTS transmissions that would otherwise be spent on corrupted DATA packets are saved.
7.5 Performance Evaluation
7.5.1 Simulation Environments
We now evaluate the performance of our DUCHA protocol and compare it with the
IEEE 802.11 scheme. The simulation tool is ns-2, one of the most widely used network
simulators. The propagation model is the two-ray ground model. The transmission range
of each node is approximately 250 m and the sensing/interference range is approximately
550 m, according to the default values of the received power threshold and the carrier
sensing threshold. Other default values of important parameters are shown in Table 7–1.
Table 7–1: Default values in the simulations

    Preamble of all kinds of packets   192 µs
    Control channel speed              0.3 Mbps
    Data channel speed                 1.7 Mbps
    DATA rate in 802.11                2.0 Mbps
    Capture threshold                  10 dB
    Length of RTS                      160 bits
    Length of CTS                      112 bits
    Length of NCTS                     112 bits
    Length of ACK                      112 bits
    Length of NACK signal              150 µs
    DATA packet size                   1000 bytes
In our simulation study, several important performance metrics are evaluated, which
are described below:
• Aggregated end-to-end throughput – the sum of data packets delivered to the destinations.
• Aggregated one-hop throughput – the sum of all the packets delivered to the destinations,
multiplied by the number of hops they traverse. This metric measures the total resource
efficiently utilized by the applications or the traffic. If all flows are one-hop flows, this is
the same as the aggregated end-to-end throughput, referred to as the aggregated
throughput in the figures.
• Transmission efficiency of DATA packets – the ratio of the aggregated one-hop
throughput to the number of transmitted DATA packets. This metric reflects the resource
wasted by collided DATA packets and by DATA packets discarded due to queue overflow
at the intermediate nodes of the path.
• Normalized control overhead – the ratio of all kinds of control packets, including RTS,
CTS, NCTS and ACK, to the aggregated one-hop throughput.

Figure 7–4: One simple topology: four nodes A, B, C and D on a line, with distances
240 m (A–B), 320 m (B–C) and 240 m (C–D).
The collided DATA packets and the discarded DATA packets have also been evaluated
in some cases. The collided DATA packets are those transmitted but corrupted by hidden
terminals. The discarded DATA packets are those dropped after repeated failed
retransmissions of RTS or DATA packets.
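For concreteness, the four metrics can be computed from per-flow counters as in the sketch below; the argument names and record format are invented for illustration.

```python
# Illustrative computation of the evaluation metrics from simulation counters.
def metrics(flows, data_tx_count, control_pkt_count):
    """flows: list of (delivered_packets, hop_count) pairs per flow."""
    end_to_end = sum(d for d, _ in flows)            # aggregated end-to-end throughput
    one_hop = sum(d * h for d, h in flows)           # aggregated one-hop throughput
    efficiency = one_hop / data_tx_count             # transmission efficiency of DATA
    control_overhead = control_pkt_count / one_hop   # normalized control overhead
    return end_to_end, one_hop, efficiency, control_overhead

# Two flows: 300 packets delivered over 3 hops, 100 packets over 5 hops.
print(metrics([(300, 3), (100, 5)], data_tx_count=2000, control_pkt_count=2800))
```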
7.5.2 Simple Scenarios
To verify the correctness of our protocol, we first investigate the simple scenario
shown in Fig. 7–4, in which the hidden terminal, exposed terminal and receiver blocking
problems all arise if the IEEE 802.11 MAC protocol is used.
Hidden terminals
There are two flows with the same CBR traffic: flow 1 is from A to B and flow 2 is
from C to D. C is a hidden terminal of A: it can neither sense A's transmission nor
correctly receive B's CTS. The ratio of the two distances, i.e., dBC/dAB ≈ 1.33 < ∆c, so
C's transmission will introduce collisions at node B, corrupting B's reception.
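This condition can also be checked numerically. Assuming the two-ray ground model's d^-4 path loss (the model used in the simulations here), the signal-to-interference ratio at B works out as follows; the path-loss exponent is an assumption matching that model, and 10 dB is CPThresh from Table 7–1.

```python
import math

# SIR at B when A (240 m away) transmits and hidden terminal C (320 m away)
# interferes; path-loss exponent 4 is assumed (two-ray ground model).
d_ab, d_bc = 240.0, 320.0
capture_threshold_db = 10.0

sir_db = 40 * math.log10(d_bc / d_ab)   # 10 * log10((d_bc / d_ab)^4)
print(f"SIR at B = {sir_db:.1f} dB")    # 5.0 dB, below the 10 dB threshold
```

Since 5 dB is below the capture threshold, B cannot capture A's packet over C's interference, consistent with the collisions described above.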
Figure 7–5: Simulation results for the simple topology, comparing 802.11 and DUCHA:
(a) collided DATA packets vs. total offered load (hidden terminal problem); (b) aggregated
throughput vs. total offered load (exposed terminal problem); (c) flow 1's discarded DATA
packets vs. flow 1's offered load from A to B (receiver blocking problem); (d) aggregated
throughput vs. total offered load (maximum spatial reuse).
Fig. 7–5(a) shows that the number of collided DATA packets increases with the offered
load in IEEE 802.11, while our protocol suffers no DATA packet collisions. This verifies
that there is no hidden terminal problem for the transmission of DATA packets in our
protocol: B's busy tone signal prevents the hidden terminal C from transmitting, so there
are no collisions at B and B can still receive A's DATA packets. In the IEEE 802.11 MAC
protocol, however, C has no way to know that A is transmitting DATA packets to B, and
hence causes collisions at B if it begins transmitting.
Exposed terminals
We now examine the exposed terminal problem. Assume that there are two flows with
the same CBR traffic: one from B to A and another from C to D. B and C are exposed
terminals of each other; for example, B can sense C's transmission but not D's, and B's
transmission will not interfere with D's reception.
In the IEEE 802.11 MAC protocol, B and C cannot transmit DATA packets at the
same time, while in our DUCHA protocol they can. So our protocol should have much
higher aggregated throughput in this simple scenario under heavy offered load; the
improvement is about 55%, as shown in Fig. 7–5(b).
Receiver blocking problem
The topology remains the same, except that C always has packets to transmit to D.
When C is transmitting to D, B is the blocked receiver: it cannot respond to A's RTS,
which leads to packet discarding.
Fig. 7–5(c) shows that in IEEE 802.11 the sender A, whose intended receiver B is
blocked, cannot successfully transmit any packets. This is because B cannot correctly
receive A's RTS, so A repeatedly discards DATA packets after multiple RTS transmission
failures. In our protocol DUCHA, by contrast, control packets are transmitted in a separate
channel, and the blocked receiver can return an NCTS packet to its intended sender during
neighboring DATA transmissions. Furthermore, in our protocol A obtains a share of the
bandwidth to transmit DATA packets, while in IEEE 802.11, A's DATA transmissions
would be corrupted by its hidden terminal C even if the RTS-CTS exchange between A
and B succeeded.
Improvement of spatial reuse
Our DUCHA protocol allows the hidden terminal to receive DATA packets as well as
the exposed terminal to transmit DATA packets, which improves the spatial reuse. In the
simulation, there are two flows with the same CBR traffic: flow 1 is from A to B and flow
2 is from D to C.
Fig. 7–5(d) shows that our protocol has much higher aggregated throughput than the
IEEE 802.11 MAC. The latter suffers not only from poor spatial reuse but also from
collisions among RTS, CTS, DATA and ACK packets, since B and C are hidden terminals
of A and D, respectively.
Intra-flow contention
Our protocol DUCHA mitigates the intra-flow contention, as discussed in Section
7.4. Fig. 7–6 shows the aggregated throughput of a 9-node chain topology. DUCHA
improves the maximum throughput by about 25% and achieves 40% higher throughput
than the IEEE 802.11 MAC under heavy offered load. This is because DUCHA has a large
spatial reuse ratio in the DATA channel and achieves the optimum packet scheduling for
the chain topology independent of the traffic load, while the IEEE 802.11 MAC suffers
from collisions under heavy load.
7.5.3 Random Topology for One-hop Flows
In this simulation study, 60 nodes are randomly placed in a 1000 m × 300 m area.
Each node generates the same CBR traffic and randomly selects one neighbor as the
destination, at least a minimum source-destination distance (0, 100, or 200 m) away. All
results are averaged over 30 random simulations.
We observe from Fig. 7–7 that the aggregated throughput of all flows decreases
when the minimum source-destination distance increases. The aggregated throughput of
our protocol is higher than that of the IEEE 802.11 MAC, and it degrades much more slowly in
Figure 7–6: End-to-end throughput vs. offered load for the 9-node chain topology
(802.11 vs. DUCHA).

Figure 7–7: Aggregated throughput vs. total offered load for random one-hop flows,
with minimum one-hop distances of 0 m, 100 m and 200 m (802.11 vs. DUCHA).
our protocol than in the IEEE 802.11 MAC; the improvement ranges from about 8% to
28% as the minimum source-destination distance increases from 0 m to 200 m.
This is reasonable. Suppose A and B are a source-destination pair. The larger the
distance between A and B, the larger the hidden area in which nodes cannot sense A's
transmission but can sense B's transmission. So in the IEEE 802.11 MAC, the hidden
terminal problem becomes more severe as the distance between A and B grows. Moreover,
in the IEEE 802.11 MAC, none of the nodes in the sensing range of A or B may transmit
anything, i.e., neither sensing range can be reused by other transmissions. In our protocol
DUCHA, however, the exposed area, where nodes can sense the sender's transmission but
not the receiver's, can be reused by new senders, and the hidden area, where nodes can
sense the receiver's transmission but not the sender's, can be reused by new receivers.
Thus, the larger the source-destination distance, the greater the capacity advantage of
DUCHA over the IEEE 802.11 MAC.
In fact, most of the current routing algorithms maximize the distance between the
upstream node and the downstream node when selecting a path to reduce the hop-count,
the delay and the power consumption for delivering the packets from the source to the
destination. Our protocol DUCHA also gives a good solution to the intra-flow contention
problem and could achieve optimum packet scheduling for the chain topology.
7.5.4 Random Topology for Multihop Flows
In this simulation study, 60 nodes are randomly placed in a 1000 m × 300 m area.
The source of each flow randomly selects one node as the destination, which is at least a
certain minimum number of hops (3 or 5) away. There are 20 flows in total, all with the
same CBR/UDP traffic. We use pre-computed shortest paths with no routing overhead.
All results are averaged over 30 random simulations.
Figure 7–8: Simulation results for multihop flows in the random topology, comparing
DUCHA and 802.11 for minimum hop counts of 3 and 5, vs. total offered load: (a)
aggregated end-to-end throughput; (b) aggregated one-hop throughput; (c) transmission
efficiency of DATA packets; (d) normalized control overhead.
Aggregated End-to-End Throughput
We observe from Fig. 7–8(a) that when the minimum hop count of each flow
increases, the aggregated end-to-end throughput of both protocols decreases. This is
reasonable, because packets of multihop flows have to traverse more links and thus
consume more resource for the same arriving traffic.
The throughput of the IEEE 802.11 MAC drops more dramatically than that of DUCHA
when the minimum hop count increases. Compared to the IEEE 802.11 MAC, the
throughput improvement is about 2 times and 5 times for the scenarios where the
minimum hop count of all flows is 3 and 5, respectively.
Aggregated One-Hop Throughput
Our protocol DUCHA has much higher aggregated one-hop throughput than the IEEE
802.11 MAC, as shown in Fig. 7–8(b). This implies that DUCHA utilizes much more of
the wireless ad hoc network's resource than the IEEE 802.11 MAC does.
The resource efficiently utilized by the flows decreases greatly in the IEEE 802.11
MAC when the hop count of each flow increases, while our protocol DUCHA maintains a
relatively high resource utilization ratio for multihop flows with different hop counts, and
even utilizes more resource efficiently as the hop count increases. This implies that the
IEEE 802.11 MAC is not appropriate for multihop ad hoc networks, while DUCHA works
well and scales to larger networks where flows have larger hop counts.
Transmission Efficiency of DATA Packets
The transmission efficiency of DATA packets in our protocol is 2 to 5 times higher
than that in the IEEE 802.11 MAC, and the longer the path, the greater the improvement,
as can be observed in Fig. 7–8(c).
In addition, our protocol maintains a relatively stable transmission efficiency of
DATA packets for flows with different hop counts, while that of the IEEE 802.11 MAC
degrades significantly when the hop count of each flow increases. The reason is that our
protocol DUCHA not only suffers no collided DATA packets, but also has far fewer
accumulated and discarded packets at the intermediate nodes along the paths. This means
that our protocol can save significant resource and lower the power consumption needed
to deliver the same amount of DATA packets.
Normalized Control Overhead
From Fig. 7–8(d), we observe that the normalized control overhead is also much
lower in our protocol than in the IEEE 802.11 MAC. It increases linearly with the offered
load for multihop flows in the IEEE 802.11 MAC, while our protocol DUCHA maintains
a small, stable value. Moreover, as with the other performance metrics, the normalized
control overhead remains relatively stable for flows with different hop counts in DUCHA,
while in the IEEE 802.11 MAC it grows as the hop count of each flow increases. This
implies that our protocol transmits DATA packets much more efficiently, and that the
IEEE 802.11 MAC does not work well for multihop flows, especially under heavy load,
where it suffers an "explosion" of control packets and lower throughput.
7.6 Conclusions
This chapter first identifies the sources of the dramatic performance degradation of
the IEEE 802.11 MAC in multihop ad hoc networks and then presents a new MAC
protocol, DUCHA, which uses dual channels: one for control packets and the other for
DATA packets. A busy tone signal is used to solve the hidden terminal problem and also
to convey the negative ACK (NACK) signal when necessary. Our protocol solves the
hidden terminal, exposed terminal, receiver blocking and intra-flow contention problems
and has a much higher spatial reuse ratio than the IEEE 802.11 MAC. There are no
collisions of DATA packets or NACK signals, and far fewer control packets and discarded
DATA packets. Our protocol uses the negative CTS (NCTS) to notify the sender that its
intended receiver is blocked and cannot receive DATA packets, whereas the IEEE 802.11
MAC cannot distinguish this case from an unreachable destination. Thus, our protocol is
friendlier to the routing layer, issuing far fewer unnecessary rerouting requests by
providing more accurate next-hop information.
Extensive simulations show that our protocol improves the throughput by 8%-28%
for one-hop flows and by several times for multihop flows when it uses the same total
bandwidth as the IEEE 802.11 MAC. In addition, our protocol is scalable to large
networks and maintains a high resource utilization ratio and a stable normalized control
overhead, while the IEEE 802.11 MAC does not work well for multihop flows under
heavy traffic.
CHAPTER 8
A SINGLE-CHANNEL SOLUTION TO HIDDEN/EXPOSED TERMINAL
PROBLEMS IN WIRELESS AD HOC NETWORKS
In wireless multihop ad hoc networks, collisions due to the hidden terminal problem
are common and make it difficult to provide the required quality of service for multimedia
services or to support priority-based services. In this chapter, we first analyze the
shortcomings of existing approaches to alleviating the hidden terminal problem. Then we
propose a new scheme in which the receiver sends short busy advertisements over the
same channel to clear the floor for receiving. The carrier sensing range is set as small as
the interference range to alleviate the exposed terminal problem. The new scheme requires
only a single transceiver and a single channel. We analyze and evaluate the performance
of the proposed scheme extensively. The results show that the new scheme has much
higher efficiency than the existing approaches using a single channel and a single
transceiver.
8.1 Introduction
Wireless multihop ad hoc networks, such as mobile ad hoc networks, sensor networks
and wireless mesh networks, have attracted much attention in recent years because they
can support many applications in daily life as well as in military communication. In many
scenarios, these networks are required to support certain bandwidth and delay
requirements or to give high priority to some important services. However, packet loss
due to the hidden terminal problem may lead to unacceptable quality for these services.
Therefore, solving the hidden terminal problem is a must.
In the hidden terminal problem, a packet collision happens at the intended receiver
when a hidden terminal transmits. Here, a hidden terminal is a node that cannot sense the
ongoing transmission but can introduce enough interference to corrupt the reception if it
transmits. For example, in Fig. 8–1, there is an ongoing transmission from A
Figure 8–1: The hidden terminal problem, showing the transmission radius r_t, the
interference radius r_i and the carrier sensing radius r_cs.
to B. C is a hidden terminal of A and may transmit during A's ongoing transmission,
which leads to a collision at B. Because C does not know whether A is transmitting, it
may occupy the channel at any time, so the quality of the flow from A to B cannot be
guaranteed whenever C has packets for D. We illustrate the related carrier sensing,
transmission and interference ranges in more detail in Section 8.2.
A widely studied solution to the hidden terminal problem is the out-of-band busy tone
approach ([122, 59, 133, 165, 163, 161, 153] and references therein). The receiver sends
a busy tone signal on the busy tone channel while receiving DATA packets on the DATA
channel, and all nodes in the network are required to monitor the busy tone channel. If a
node overhears the busy tone signal, it must keep silent to avoid possible collisions. This
approach addresses the hidden terminal problem well, but it requires both an additional
channel and an additional transceiver.
Several approaches ([138, 142, 161, 153, 155] and references therein) have also been
proposed to address the hidden terminal problem without requiring an additional channel
or transceiver. A common approach is to use a large carrier sensing range (LCS) to cover
the interference range around the receiver, as shown in the left part of Fig. 8–2. If there is
no obstruction in between, all the nodes whose transmissions can interfere with the packet
reception can sense the transmission from the transmitter and hence are required to keep
silent to avoid collisions. However, this approach decreases the spatial reuse ratio
Figure 8–2: Carrier sensing range and interference range in LCS (left) and SBA-MAC
(right).
by silencing many nodes that are outside the interference range of the receiver and would
not interfere with the ongoing transmission and reception if they transmitted. Furthermore,
it does not completely solve the hidden terminal problem when there is an obstruction
between the nodes. For example, in Fig. 8–2, node C cannot sense the transmission from
A because of an obstruction in between and thus remains a hidden terminal. A variation
of this approach maintains the same carrier sensing range but reduces the transmission
range by enforcing a higher power threshold for packet reception [138]. The basic idea is
still to cover the interference range of the receiver within the carrier sensing range of the
transmitter; it yields the same spatial reuse ratio and does not address the hidden terminal
problem either when an obstruction exists.
To address the shortcomings of the above approaches, a hidden terminal has to defer
its transmission according to a received or sensed signal/packet from the current receiver
on the same channel used for DATA transmission. Fullmer and Garcia-Luna-Aceves proposed a
scheme called FAMA [47] that uses a "CTS dominance" mechanism to ensure collision-free
data packet reception. This mechanism requires nodes sensing any noise to defer
their transmissions long enough for a maximum-length data packet to be received. It will
mistakenly treat collisions and any undecodable transmissions of frames other than CTS as
"CTS dominance" and then waste the channel during the long deferring time. Yeh proposed
CSMA/IB [144], which requires the receiver to transmit a short signal or in-band
busy tone between the received data fragments. Any nodes overhearing the signal have to
defer their transmissions for a duration equal to the transmission time for a maximum data
fragment. Compared to FAMA, CSMA/IB can reduce the deferring time significantly if
the length of a maximum data fragment is much less than a maximum-length data packet.
However, busy tone periods increase the total transmission time of a data packet. Data
fragments also introduce more control overhead, like the physical and MAC layer headers
[68]. The performance of CSMA/IB has not been well evaluated. How to set the lengths of the
busy tone signal and the maximum data fragment, and what their impact on performance is,
deserve careful study.
In this chapter [159], we propose a new MAC scheme using dummy bits and short
busy advertisement (SBA) signals based on the CSMA/CA (carrier sense multiple ac-
cess with collision avoidance) or the IEEE 802.11 MAC scheme. In the basic SBA-MAC
scheme, several short periods of dummy bits are inserted in the DATA frame. During these
periods, the receiver switches to the transmission mode, transmits a short busy adver-
tisement consisting of synchronization symbols, and then switches back to continue the
packet reception. A node defers its transmission for a BIFS (interframe spacing due to
busy advertisement) period after detecting an SBA signal or any noise. The above SBA pro-
cedure is only used when a hidden terminal is detected, and the normal 802.11 operation
is used otherwise. In addition, the transmission power of busy advertisement is controlled
to improve spatial reuse ratio while still solving the hidden terminal problem. We analyze
the performance of the SBA-MAC scheme and study the impact of the length of BIFS on
the system performance. The results show that SBA-MAC can significantly outperform the
previous approaches using a single channel. The features and advantages of SBA-MAC are
summarized as follows.
• SBA-MAC only needs a single channel and requires a single transceiver at each node
to ensure collision-free data packet reception.
• Deferring time due to undecodable interference is decreased to BIFS (close or equal
to EIFS) instead of the transmission time of a maximum-length data packet in FAMA.
Thus transmission efficiency is greatly increased.
• The carrier sensing range is set as small as the interference range, i.e., it is only large
enough to protect the reception of the ACK frame at the transmitter. In this way, the exposed
terminal problem is greatly mitigated and spatial reuse is greatly improved.
• A very short busy advertisement signal is used to address the hidden terminal prob-
lem. To notify the hidden terminals of the current transmission, a receiver sends out
this signal on the same channel as the data transmission, instead of a continuous busy tone
signal on another channel as in the out-of-band busy tone solution. Thus no additional
channel is required.
• The receiver only transmits a busy advertisement signal during the transmission pe-
riod of dummy bits which are ignored by the receiver. Thus a single transceiver is
enough.
• Each part of a data frame divided by dummy bits does not need to include a whole
physical layer header and a MAC layer header to keep the receiver synchronized
with the transmitted frame. Thus the protocol overhead added to the frames is kept
at a minimum level.
• In SBA-MAC, power control is used for both data frames and busy advertisement
signals. The spatial reuse ratio is greatly increased compared to approaches that only
control the transmission power of the data frame.
The rest of this chapter is organized as follows. Section 8.2 illustrates various ranges
in carrier sense MAC protocols. The SBA-MAC scheme is proposed in Section 8.3. In
Section 8.4, we study how to control transmission power to further increase the spatial reuse
ratio. Then we analyze and evaluate the performance of the proposed scheme in Section
8.5. Finally, we conclude the chapter in Section 8.6.
8.2 Various Ranges in Wireless Multihop Ad Hoc Networks
In this section, we illustrate the definitions of carrier sensing range, transmission or
communication range, and interference range. Notice that all these ranges are determined
by some thresholds of the power level of the received or sensed signal.
Communication range is sometimes also called transmission range. It indicates an
area around one node where all other nodes can correctly receive the packet transmitted by
this node if there are no other interfering signals. It is determined by the receiver sensitivity
P_se. At the edge of the range, the received signal from the node in the center has a power
level equal to the receiver sensitivity.
Carrier sensing range indicates an area around one node where all other nodes can
sense the transmission from this node. It is determined by the carrier sensing threshold
P_cs. At the edge of the range, the received signal from the node in the center has a power
level equal to the carrier sensing threshold. Any node other than the intended receiver in
this range that senses the transmission from the node at the center indicates
a busy channel to the MAC layer and hence is required not to transmit during the busy
period.
Interference range indicates an area around one node where another transmission can
interfere with reception at this node so that it fails to receive the packet from the intended
transmitter. This range is determined by both the received power of the intended packet and
the required signal to interference plus noise ratio (SNR). If there is no other transmission in this
range and interference only comes from a transmission outside of this range, the receiver
should have an SNR larger than the requirement for correct reception. The larger the received
power of the intended signal is, the smaller the range is. We refer to the range as a typical
interference range when the received power is equal to the receiver sensitivity. Apparently, the
typical interference range is larger than the interference range for a received power larger
than the receiver sensitivity. The interference range can be determined by the maximum
allowable interference power P_i (or P_i^* for the typical interference range). Wherever the
received interference power from one node is larger than P_i, that node is in the interference
range of the considered node; and

    P_i = P_r / SNR,    P_i^* = P_se / SNR    (8.1)
where P_r is the received power of the intended packet at the current receiver. If we assume
that every node uses the same transmission power, we have

    d_i = d_h (P_r / P_i)^(1/γ),    d_i^* = d_t (P_se / P_i^*)^(1/γ)    (8.2)
where d_i is the radius of the interference range, d_h is the distance between the current
transmitter and its intended receiver (or simply the hop distance of the current hop/link), d_i^*
is the radius of the typical interference range, γ is the path loss exponent, and d_t is the
maximum communication distance.
Notice that two or more concurrent interference signals may exist. The interference
range defined by P_i in Equation (8.1) is not large enough to cover nodes which may corrupt
the packet reception. Normally [155], P_i and P_i^* should be 2 or 3 dB smaller to address
this issue.
The SNR is usually required to be larger than 0 dB for correct reception. Therefore,
P_i^* > P_se and the typical interference range is larger than the communication range. In the default
settings of the widely used simulation tool ns-2, the carrier sensing radius is 2.2 times the trans-
mission radius, and the interference radius is about 1.78 times the transmission radius when
the capture threshold is set to 10 dB.
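These range relations can be checked numerically. The sketch below (a Python illustration of Eqs. (8.1)–(8.2), assuming a simple power-law path-loss model; the function name is ours) reproduces the ns-2 interference-to-transmission radius ratio quoted above when γ = 4:

```python
def interference_radius(d_h, capture_db, gamma):
    """Radius of the interference range around a receiver at hop distance
    d_h: d_i = d_h * (P_r / P_i)^(1/gamma) with P_i = P_r / SNR
    (Eqs. 8.1-8.2); capture_db is the required SNR in dB."""
    snr = 10.0 ** (capture_db / 10.0)     # dB -> linear ratio
    return d_h * snr ** (1.0 / gamma)

# Typical interference range: hop distance equal to the maximum
# communication distance d_t, received power at the sensitivity level.
d_t = 250.0        # ns-2 default transmission radius, in meters
gamma = 4          # two-ray ground path-loss exponent
d_i_star = interference_radius(d_t, 10.0, gamma)
print(round(d_i_star / d_t, 2))   # about 1.78, as in the ns-2 defaults
```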
8.3 Addressing the Hidden/Exposed Terminal Problems with Short Busy Advertisement Signal
In this section, we introduce the SBA-MAC scheme. We first explain the basic op-
erations at a transmitter, a receiver and other neighboring nodes. Then we study how to
Figure 8–3: Four-way handshake with busy advertisement signals
construct a short busy advertisement signal and how to set various parameters in the SBA-
MAC scheme. Finally, we discuss why this scheme can greatly increase the spatial reuse
ratio and the compatibility issue with the legacy 802.11 nodes.
8.3.1 Basic Operations in the SBA Procedure
The SBA procedure is only used when a possible hidden terminal exists. We will
discuss how to determine whether there is a hidden terminal problem and when to start and
stop the SBA procedure in Section 8.3.7.
When the SBA procedure is adopted, a transmitter divides the payload of the DATA
frame into several parts or fragments and inserts a small block of bits between two adjacent
parts. These bits are dummy bits and can take any values. A transmission period of
these dummy bits is referred to as an intra-data-frame spacing or an inter-data-fragment
spacing (IDFS).
During an IDFS period, the intended receiver ignores the received signal, sends out
a busy advertisement signal over the same channel to notify the hidden terminals of the
ongoing transmission, and then switches back to continue the packet reception. The corre-
sponding message sequence is shown in Fig. 8–3.
To protect the data fragments, any device sensing the busy advertisement signal in the
typical interference range (i.e., the sensed power is larger than P_i^*; refer to Section 8.2)
should keep silent for a certain period. We refer to this period as a BIFS period, or an
interframe spacing due to a busy advertisement signal. Apparently, to guarantee an error-
free reception of the data fragment, BIFS should be large enough for a maximum-length
data fragment to be received. Since the busy advertisement is also subject to collision and
hence becomes an undecodable signal to some nodes, a node sensing any undecodable
signal with a power larger than P_i^* is also required to defer its transmission for at least a
BIFS period.
8.3.2 Mitigating Exposed Terminal Problem by Adjusting Carrier Sensing Range
In the proposed scheme, we set the carrier sensing range to the same size as that of
the typical interference range (refer to Section 8.2), as shown in the right part of Fig. 8–2,
i.e.,

    P_cs = P_i^*    (8.3)
Any node that senses a signal with a power level larger than the typical interference power P_i^*
should indicate a busy channel and defer its transmission during the busy period. If the
signal is undecodable or a busy advertisement signal, it is further required to keep silent for
at least a BIFS period after the signal is finished. If the signal is a correctly received MAC
frame, it keeps silent during the period indicated in the duration field of the frame, using the
original virtual carrier sensing mechanism.
In this way, a transmitter only silences the nodes that it may interfere with and that may
interfere with its reception of an ACK frame. A busy advertisement signal only silences
those hidden terminals which can interfere with the current packet reception. In this chap-
ter we also refer to the interference range around a receiver, which is silenced by a busy
advertisement, as a busy advertisement range hereafter. Therefore, our approach allows
more concurrent transmissions and hence can significantly increase the spatial reuse ratio
compared to the approach using a large carrier sensing range, as shown in Fig. 8–2.
8.3.3 Parameters in SBA Procedure
Let T_IDFS denote the length of an IDFS period. T_IDFS must be large enough for the
receiver to switch from receiving to transmitting, to send out a busy advertisement signal,
and to switch back to receiving:

    T_IDFS = T_RT + T_BA + T_TR
    T_RT ≤ T_SIFS
    T_BA ≥ T_aCCATime
    T_TR ≤ T_SIFS    (8.4)
T_RT is the time that the MAC and PHY (physical layer) require to switch from receiving
to transmitting. T_BA is the time that the device requires to send out a busy advertisement
which is long enough for other devices to sense. T_TR is the time that the MAC and PHY
require to switch from transmitting to receiving. As defined in the IEEE 802.11 standards,
T_aCCATime is the minimum time for the CCA (clear channel assessment) mechanism to
assess the medium and to determine whether the medium is busy or idle. A short interframe
space (SIFS), also defined in the IEEE 802.11 standards, is long enough for a T_RT or a T_TR
between an incoming frame and an outgoing frame or vice versa. Since the receiver does
not need any response from other devices during an IDFS period, which is required in
SIFS in the original four-way handshake, it is possible for T_RT or T_TR to be less than
SIFS, depending on the implementation of the physical layer [68, 71, 69, 70].
Let T_BIFS be the length of a BIFS period. Notice that an EIFS procedure is already
adopted in the IEEE 802.11 MAC protocol. A node is required to keep silent for at least an
EIFS period after it detects an undecodable signal. The EIFS period is used to protect the
reception of an ACK frame. To provide the same function, the BIFS procedure replaces the
original EIFS procedure and T_BIFS should be larger than or equal to T_EIFS. Since a node
only knows that it is the intended receiver after receiving the physical and MAC headers,
BIFS should be large enough to protect the reception of these headers. So

    T_BIFS ≥ max(T_EIFS, T_PHY + T_MAC + T_SIFS + 2T_prop + T_RT)    (8.5)

where T_PHY and T_MAC are the transmission times of the physical header and the MAC
header, respectively. T_prop is the maximum propagation delay between two communicating
Figure 8–4: Positions of IDFS periods in the DATA frame (shown for the cases with and without RTS/CTS)
nodes. On the other hand, T_BIFS must be larger than or equal to the maximum transmission
time of a data fragment. Notice that between two consecutive busy advertisement signals,
the receiver needs to spend time T_TR and T_RT besides the time for reception of a DATA
fragment. Therefore, to keep a hidden terminal silent, the transmission time of a DATA
fragment T_frag must satisfy

    T_frag ≤ T_BIFS − T_RT − T_TR    (8.6)

if it is not the last fragment, and

    T_frag ≤ T_BIFS − T_TR    (8.7)

otherwise. Accordingly, a transmitter divides a data frame into one or more fragments and
places some dummy bits in between through the following method.
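As a quick illustration of the constraints (8.6)–(8.7), the following sketch computes the maximum fragment durations for a given BIFS; the function name and timing values are illustrative, not taken from the standard:

```python
def max_fragment_time(t_bifs, t_rt, t_tr, is_last):
    """Maximum DATA-fragment transmission time under Eqs. 8.6-8.7.

    Between two consecutive busy advertisements the receiver spends
    t_tr + t_rt turning around, so a non-last fragment must fit in
    BIFS minus both turnaround times (Eq. 8.6); the last fragment only
    loses the transmit-to-receive turnaround (Eq. 8.7)."""
    return t_bifs - t_tr if is_last else t_bifs - t_rt - t_tr

# Illustrative values in microseconds, e.g. BIFS chosen equal to EIFS.
T_BIFS, T_RT, T_TR = 364.0, 5.0, 5.0
print(max_fragment_time(T_BIFS, T_RT, T_TR, is_last=False))  # 354.0
print(max_fragment_time(T_BIFS, T_RT, T_TR, is_last=True))   # 359.0
```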
8.3.4 Positions of IDFS Periods in the DATA Frame
The transmitter puts one or more IDFSs in the DATA frame according to the frame’s
transmission time. The receiver determines the positions of IDFS periods according to the
duration field in the MAC header of the DATA frame. Let T_DATA be the transmission time
of the original DATA frame before adding IDFS periods.
If RTS/CTS are used, a transmitter uses the following procedure to place IDFS periods
in the DATA frame.
• If T_DATA ≤ T_BIFS − T_SIFS − 2T_prop, it is not necessary to put an IDFS in the DATA
frame.
• If T_BIFS − T_SIFS − 2T_prop < T_DATA ≤ T_BIFS − T_SIFS − 2T_prop − T_RT + T_BIFS − T_TR,
place one IDFS as far as possible from the end of the DATA frame, but not more than
T_BIFS − T_TR away from the end of the DATA frame, and it must be after the MAC
layer header.
• If T_DATA > T_BIFS − T_SIFS − 2T_prop − T_RT + T_BIFS − T_TR, two or more IDFS
periods should be placed in the DATA frame. The first IDFS period must be placed
at a position after the MAC header. The first fragment, including the PHY and MAC
headers, must be less than or equal to T_BIFS − T_SIFS − 2T_prop − T_RT. The second
fragment has a length less than or equal to, but as close as possible to, T_BIFS − T_TR − T_RT.
The last data fragment lasts for a period of T_BIFS − T_TR. All other fragments
have a length equal to T_BIFS − T_TR − T_RT. Fig. 8–4 shows such a DATA frame with
four fragments.
If RTS/CTS are not used, a transmitter uses the following procedure to place IDFS periods
in the DATA frame.
• Place the first IDFS period immediately after the MAC layer header.
• If T_DATA − T_PHY − T_MAC > T_BIFS − T_TR, two or more IDFS periods should be
placed in the DATA frame. The first one is placed immediately after the MAC layer
header. The last one is placed at the position where the remaining transmission time
of the DATA frame is equal to T_BIFS − T_TR. The length of the data fragment after the
first IDFS period is less than or equal to T_BIFS − T_TR − T_RT, and any subsequent
data fragments other than the last one have a length equal to T_BIFS − T_TR − T_RT.
Fig. 8–4 shows such a DATA frame with four fragments.
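The placement rules for the RTS/CTS case can be sketched as a small function that returns the fragment durations; the function name and timing values are hypothetical, and here the slack is absorbed by the second fragment, as in the third bullet above:

```python
import math

def place_fragments(t_data, t_bifs, t_sifs, t_prop, t_rt, t_tr):
    """Fragment durations for the RTS/CTS case (Section 8.3.4 rules),
    all times in the same unit, e.g. microseconds."""
    first_max = t_bifs - t_sifs - 2 * t_prop      # limit before the first BA
    if t_data <= first_max:
        return [t_data]                           # no IDFS needed
    last = t_bifs - t_tr                          # last fragment (Eq. 8.7)
    mid = t_bifs - t_tr - t_rt                    # middle fragments (Eq. 8.6)
    if t_data <= (first_max - t_rt) + last:
        return [t_data - last, last]              # a single IDFS suffices
    # Two or more IDFS periods: first and last fragments are fixed, the
    # second fragment absorbs the slack, all others have length `mid`.
    first = first_max - t_rt
    rest = t_data - first - last
    n_mid = max(1, math.ceil(rest / mid))
    return [first, rest - (n_mid - 1) * mid] + [mid] * (n_mid - 1) + [last]

# Illustrative timing values (not taken from the standard):
frags = place_fragments(1200.0, 360.0, 10.0, 1.0, 5.0, 5.0)
print(frags)         # four fragments whose durations sum to 1200
print(sum(frags))
```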
The above procedure attempts to transmit the last busy advertisement as early as possible
before the end of the transmission, yet still late enough to protect the reception
of the last data fragment. With an appropriate value of T_BIFS, it is rare that a device is
required to keep silent after the ongoing transmission is finished due to a sensed busy
advertisement. We will study how to set the value of T_BIFS in Section 8.5.
8.3.5 Busy Advertisement Signal
The busy advertisement signal can be the training symbols in a physical layer pream-
ble, as long as it is long enough for other devices to sense, i.e., larger than or equal to
T_aCCATime. Since the preamble is transmitted at the lowest basic rate, it is much easier
to detect than many other signals. The busy advertisement signal can also be any other
well-defined signal that facilitates its detection and differentiation from other signals.
8.3.6 Power Control for Short Busy Advertisement
As discussed in Section 8.2, the interference range changes with the received power of the
intended signal. On the other hand, the transmission power of the busy advertisement controls
the size of the reserved area around the receiver where nodes defer their transmissions if
overhearing a busy advertisement. Therefore, we can adjust the transmission power of
the busy advertisement to obtain a reserved area equal to the area of the interference range,
which is normally smaller than the typical interference range. We derive the appropriate
transmission power of a short busy advertisement signal as follows.
In SBA-MAC, the carrier sensing threshold P_cs is also used to determine the edge of the
interference range. That is to say, when the sensed power of a busy advertisement signal
is less than P_cs, a node determines that it is outside of the interference range, and otherwise
inside it. Let P_t denote the transmission power of a busy advertisement that results in
a typical interference range. Suppose P_t is also used to transmit other MAC frames like
the DATA frame. P_r denotes the received power of a DATA frame at the receiver. P_se
denotes the power defined by the receiver sensitivity. P'_tba denotes the transmission power
of a busy advertisement signal that defines an interference range subject to P_r. P'_i denotes
the received power at the considered receiver due to another transmission at the edge of the
interference range. Then

    P'_tba / P_cs = P_t / P'_i,    P_r / P'_i = P_se / P_cs    (8.8)
The first equation in (8.8) comes from the fact that the path loss from the consid-
ered receiver to a node at the edge of the reserved area is equal to that from a node at the
edge of the reserved area to the considered receiver. The second equation in (8.8)
indicates that the SNR should not be sacrificed due to a small reserved area compared to the
typical interference range for P_r = P_se. Then

    P'_tba = P_cs P_t / P'_i = P_t P_se / P_r    (8.9)
Let d_h denote the distance between the transmitter and its intended receiver, d_t the
maximum transmission distance defined by the receiver sensitivity, d_b the radius of
the typical interference range, and d'_b the radius of the interference range.

    (d'_b / d_b)^γ = P_cs / P'_i = P_se / P_r = (d_h / d_t)^γ  ⇒  d'_b = (d_h / d_t) d_b    (8.10)
Therefore, by decreasing the transmission power of the busy advertisement from P_t to P'_tba,
the busy advertisement range is reduced to the interference range and may be much smaller
than the typical interference range. In this way, more concurrent transmissions are allowed
in the network and the spatial reuse ratio is increased.
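A minimal numeric sketch of Eqs. (8.9)–(8.10), assuming a power-law path-loss model with exponent γ; the function name and all values are illustrative:

```python
def ba_power_and_range(p_t, p_se, p_r, d_h, d_t, d_b):
    """Reduced busy-advertisement power (Eq. 8.9) and the resulting
    busy-advertisement radius (Eq. 8.10)."""
    p_tba = p_t * p_se / p_r       # Eq. 8.9: scale power down by P_se/P_r
    d_b_new = d_b * d_h / d_t      # Eq. 8.10: range shrinks with hop distance
    return p_tba, d_b_new

gamma = 4
d_t, d_h, d_b = 250.0, 125.0, 445.0        # half-distance hop, meters
p_t = 1.0                                   # nominal power (normalized)
p_r = (d_t / d_h) ** gamma                  # P_r/P_se = (d_t/d_h)^gamma = 16
p_tba, d_b_new = ba_power_and_range(p_t, 1.0, p_r, d_h, d_t, d_b)
print(p_tba)      # 0.0625: 1/16 of the nominal power
print(d_b_new)    # 222.5: half of the typical interference radius
```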
When the carrier sensing range around the transmitter covers the whole area of the
interference range around the receiver (or busy advertisement range), i.e.,

    d_h + d'_b ≤ d_cs = d_b  ⇒  d_h ≤ d_b d_t / (d_t + d_b)    (8.11)

we can choose not to send the busy advertisement, to reduce overhead, if we know there is no
obstruction between the transmitter and nodes in the interference range of the receiver.
8.3.7 Start and Stop SBA Procedure
One reserved bit in the MAC header of the DATA frame and the ACK frame [68] is used
to indicate whether or not to start the SBA procedure. We refer to this bit as the SBA-bit.
If the SBA-bit is set to one in a DATA frame, it means that the SBA procedure is used, and not
otherwise. If the SBA-bit is set in an ACK frame, it means that the SBA procedure should be
used in the next DATA frame to this receiver, and not otherwise.
Normally, the SBA procedure is disabled. Once there is no acknowledgement for a sent
DATA frame, the transmitter assumes there is a hidden terminal for the current transmission
and adopts the SBA procedure for subsequent (re)transmissions to the current receiver. When
a receiver first receives a DATA frame with the SBA-bit set to one, it sets the SBA-bit
to one in all responding ACK frames from then on. Only after a certain time during
which no undecodable signal is sensed does it assume that there is no hidden terminal
and disable the SBA-bit in responding ACK frames. When a transmitter receives
an ACK frame with the SBA-bit not set to one, it stops the SBA procedure, if used, for the
corresponding receiver.
To differentiate errors in a DATA frame due to interference from a hidden terminal
from those due to random channel bit errors, a new type of MAC frame, say NACK (negative ACK), is
used. As long as the measured SNR at the physical layer is larger than the nominal SNR
requirement, a node assumes that there is no hidden terminal and returns a NACK if there
is an error in the received DATA frame. Otherwise, a node does not respond to
the transmitter for a received erroneous DATA frame. The procedure to set the SBA-bit
in a NACK frame is the same as that for an ACK frame. When the transmitter receives
a NACK frame, it assumes there is an error in the transmission and that the error results from
a random channel bit error. It adopts the SBA procedure for subsequent transmissions of
fragments according to the SBA-bit in the NACK frame.
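The transmitter-side start/stop logic above can be summarized as a tiny per-link state machine; the class and method names below are hypothetical, and the receiver-side timer that eventually clears the SBA-bit is omitted for brevity:

```python
class SbaState:
    """Transmitter-side SBA on/off decision for one receiver (a sketch
    of the rules in Section 8.3.7)."""

    def __init__(self):
        self.use_sba = False            # SBA procedure for the next DATA frame

    def on_ack_timeout(self):
        # No ACK: assume a hidden terminal corrupted the DATA frame and
        # enable SBA for subsequent (re)transmissions to this receiver.
        self.use_sba = True

    def on_ack_or_nack(self, sba_bit):
        # The receiver's ACK/NACK reports, via the SBA-bit, whether it
        # still senses undecodable signals (possible hidden terminals).
        self.use_sba = bool(sba_bit)

link = SbaState()
link.on_ack_timeout()
print(link.use_sba)        # True: SBA procedure adopted
link.on_ack_or_nack(0)
print(link.use_sba)        # False: receiver reports no hidden terminal
```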
In the worst case where the noise floor is very high and hence channel bit errors happen
frequently, the SBA procedure will be adopted for all DATA frame transmissions. Although this
noticeably increases the overhead and the transmission time of a DATA frame, the improve-
ment due to the increased spatial reuse ratio, as discussed in Section 8.3.2, is still high enough to
compensate for it. We will study the performance considering these two factors in Section
8.5.
8.3.8 Synchronization Issue
When the receiver receives the first DATA fragment, it is synchronized with the phys-
ical layer preamble of the fragment. It needs to keep the synchronized clock information,
or store it somehow, during an IDFS, and uses it to decode subsequent data fragments after
an IDFS period. This clock signal can be used to send out the busy advertisement signal as
long as it is still synchronized with the received signal when the receiver switches back to receiving.
This means that the subsequent data fragments do not need to carry a physical and MAC
layer header like the one at the head of the whole data frame. The transmitter can also choose to
insert a short period of training symbols before each fragment to help the receiver
resynchronize after the transmission of a busy advertisement signal. All the data
fragments share the same information in the physical and MAC layer headers at the
beginning of the data frame.
A transmitter can choose not to send the dummy bits during IDFS periods to save
a little energy as long as the silent periods do not result in the loss of synchronization
information for the subsequent data fragments to be received at the receiver.
8.3.9 Accumulative Acknowledgement
An ACK frame can also be changed to an accumulative acknowledgement. Several
bits are added in the frame to indicate which fragments are correctly received and which are
not. For this purpose, each fragment should include a CRC field for the receiver to check
its correctness. After receiving an accumulative ACK frame, a transmitter only needs to
retransmit the erroneous fragments. A receiver should allocate a buffer to store the correct
fragments and wait for retransmission of corrupted fragments before defragmentation.
Accumulative acknowledgement is especially useful when the channel bit error ratio
is high. The whole data frame may have a high probability of containing an error, while
each data fragment has a much higher probability of being correctly received. Therefore,
accumulative acknowledgement can save a lot of retransmission cost in this case.
8.3.10 CTS Dominance
A hidden terminal may transmit an RTS frame at the same time as the current re-
ceiver transmits a CTS frame. Since the length of an RTS frame is larger than that of a CTS
frame [68], the head part of the data frame may collide with the tail part of the RTS frame.
To avoid this type of hidden terminal problem and ensure error-free data reception, a
new CTS frame longer than an RTS frame should be used, as in the FAMA scheme
[47]. Thus, in the above scenario, the hidden terminal can sense the tail part of the CTS frame
and hence defer its own transmission according to the carrier sensing mechanism.
8.3.11 Compatibility with Legacy 802.11 MAC Scheme
A node using SBA-MAC is allowed to adopt the original 802.11 MAC scheme to
communicate with a legacy 802.11 node that does not use SBA-MAC. However, data
reception is not protected from the hidden terminal problem for the legacy 802.11 nodes. If
BIFS is set equal to EIFS, the hidden terminal problem is still well addressed for an SBA-MAC
node. If BIFS is longer than EIFS, a legacy 802.11 node can become a hidden terminal of
an SBA-MAC node. However, data reception at an SBA-MAC node is still better protected
than at a legacy 802.11 node, because the latter will treat a busy advertisement signal as
an undecodable signal and defer only an EIFS period. In addition, if BIFS is larger than EIFS,
an SBA-MAC node has a lower probability of successfully contending for the channel than a
legacy 802.11 node when both overhear an undecodable signal, because the former
defers longer than the latter.
8.4 Maximize Spatial Reuse Ratio and Minimize Power Consumption by Power Control
In this section, we focus on how to control the transmission power of data frames
to maximize the spatial reuse ratio and minimize the total power consumption while still
addressing the hidden/exposed terminal problem. Both SBA-MAC and the approach using
a large carrier sensing range will be studied to provide a fair comparison of the two schemes
under power control.
Figure 8–5: Power control in SBA-MAC (cases: no power control; power control of the busy advertisement only; power control of the DATA frame only; power control of both the DATA frame and the busy advertisement)
8.4.1 Power Control for Both DATA Frame and Busy Advertisement in SBA-MAC
If we allow the transmission power of a DATA frame to be adjusted, the spatial reuse
ratio can be further increased by reducing the carrier sensing range around the transmitter.
We need to determine the transmission powers P'_t and P'_tba of the DATA frame and the busy
advertisement, respectively.
In a typical scenario, d_h = d_t. The transmission power of the DATA frame is P_t, the
received power at the receiver is P_se, the carrier sensing threshold is P_cs, the carrier sensing radius
is d_cs, the busy advertisement threshold is P_ba = P_cs = P_i^*, and the radius of the typical interference range
is d_b = d_cs = d_i^*. The transmission power of a busy advertisement signal is also P_t.
When the transmitter-receiver distance d_h < d_t, let P_r denote the received power at
the receiver when the transmission power is P_t. When the transmission powers of a DATA frame
and a busy advertisement signal are reduced to P'_t and P'_tba, the received power is P'_r, the carrier
sensing radius is d'_cs, and the radius of the busy advertisement range is d'_b. When there is another node
transmitting at the edge of the busy advertisement range with power P_t, the interference
power received at the considered receiver is P'_i. To maintain the same signal to interference
ratio as that in the above typical scenario,

    P'_r / P'_i = P_se / P_cs    (8.12)
The transmission power of a DATA frame changes from P_t to P'_t; however, the sensed power at
the edge of the carrier sensing range does not change. With the same transmission power P_t,
the sensed powers at distances d_b and d'_b away are P_cs and P'_i, respectively. These lead to

    P_t / P'_t = (d_cs / d'_cs)^γ,    P_cs / P'_i = (d'_b / d_b)^γ    (8.13)
Because the path loss between a pair of locations does not change with the transmission
power, we have

    P_t / P_r = P'_t / P'_r,    P'_tba / P_cs = P_t / P'_i    (8.14)
According to Equation (8.13),

    d'_cs = x d_cs,    x = (P'_t / P_t)^(1/γ)    (8.15)
Notice that with the same transmission power P_t, the received powers are P_se and P_r at
distances d_t and d_h, respectively, so

    P_r / P_se = (d_t / d_h)^γ    (8.16)
and according to Equations (8.12), (8.13) and (8.14), we have

    d'_b = d_b (P_se P_t / (P_r P'_t))^(1/γ) = d_b (d_h / d_t)(1/x) = d_cs (d_h / d_t)(1/x)    (8.17)
Because P'_t ≤ P_t and P'_tba ≤ P_t, we have d'_cs ≤ d_cs and d'_b ≤ d_b. So

    d_h / d_t ≤ x ≤ 1    (8.18)
Apparently, when x = 1, there is only power control for the busy advertisement, with d'_cs = d_cs and
d'_b = d_cs (d_h / d_t), as shown in Equation (8.10); and when x = d_h / d_t, there is only power control
for the DATA frame, with d'_cs = d_cs (d_h / d_t) and d'_b = d_cs.
To increase the spatial reuse, we need to minimize the area S(d'_cs, d'_b) covered by the
carrier sensing range around the transmitter and the busy advertisement range around the
receiver. It is easy to show that S(r1, r2) is given by

    α1 = arccos((r1^2 + d_h^2 − r2^2) / (2 r1 d_h)),    α2 = arccos((r2^2 + d_h^2 − r1^2) / (2 r2 d_h))

    S(r1, r2) = (π − α1) r1^2 + (π − α2) r2^2 + r1 d_h sin α1,  if r1 − d_h < r2 < r1 + d_h and r1 + r2 > d_h
              = π r1^2,  if r2 ≤ r1 − d_h
              = π r2^2,  if r1 ≤ r2 − d_h    (8.19)
Now the problem becomes minimizing S(d'_cs, d'_b) under conditions (8.15), (8.17)
and (8.18). That is to say,

    min { S(x d_cs, d_cs (d_h / d_t)(1/x)) }
    subject to: d_h / d_t ≤ x ≤ 1    (8.20)
It is not difficult to prove that when d'_cs = d'_b = d_cs √(d_h / d_t) = d_i^* √(d_h / d_t),
S(d'_cs, d'_b) attains its minimum value S(d_i^* √(d_h / d_t), d_i^* √(d_h / d_t)), and the
total power P'_t + P'_tba is also minimized:

    P'_t = P'_tba = P_t (d_h / d_t)^(γ/2) = P_t √(P_se / P_r)    (8.21)
8.4.2 Power Control for the Approach Using a Large Carrier Sensing Range
Let P_r be the received power at the considered receiver when the transmission power
is P_t. In the typical scenario where the transmitter-receiver distance is d_t, we have P_r = P_se, the
transmission power of a DATA frame is P_t, and P_cs defines a carrier sensing range around
the transmitter that just covers the typical interference range around the receiver. Apparently
P_cs > P_i^*, and

    d_cs = d_t + d_i^*    (8.22)
If P_r > P_se, the transmission power of the DATA frame can be reduced. When the transmis-
sion power is P'_t, the carrier sensing range with radius d'_cs around the transmitter should
still just cover the interference range with radius d'_i around the receiver. P_cs is still the
sensed power at the edge of the reduced carrier sensing range.

    d'_cs = d_h + d'_i    (8.23)
Let γ be the path loss exponent. The transmission power is reduced from P_t to P'_t, so
the radius of the carrier sensing range is also reduced from d_cs to d'_cs. Because the sensed
power at the edge of the carrier sensing range does not change and is still equal to P_cs, we
have

    P'_t / P_t = (d'_cs / d_cs)^γ    (8.24)
If another node transmits at the edge of the interference range around the receiver with power Pt, the power of the interference received at the considered receiver is P′i and satisfies

Pr/P′i = (d′i/dh)^γ    (8.25)
With a smaller carrier sensing range and a smaller interference range, we should still maintain at least the same signal-to-interference ratio as in the typical scenario:

P′r/P′i = Pse/Pi = (d∗i/dt)^γ    (8.26)
Though the transmission power is reduced, the path loss should be the same:

Pt/Pr = P′t/P′r    (8.27)
According to Equations (8.24), (8.25), (8.26) and (8.27),

(d′i d′cs/(dh dcs))^γ = (Pr/P′i)(P′t/Pt) = P′r/P′i = (d∗i/dt)^γ    (8.28)
Replacing d′i with d′cs − dh, we can solve for d′cs by noticing that d′cs > 0:

d′cs = (dh + √(dh² + 4 d∗i (d∗i + dt) dh/dt)) / 2    (8.29)
When dh = dt, from Equations (8.29) and (8.22), d′cs = dcs, which is the desired result for the typical case. Now, with the knowledge of d′cs, we can determine the new transmission power P′t by Equation (8.24).
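Equations (8.22), (8.24) and (8.29) together give a direct recipe for the reduced carrier sensing range and transmission power; a small sketch (function names are ours):

```python
import math

def reduced_sensing_range(dh, dt, di_star):
    """Radius d'_cs of the reduced carrier sensing range, Eq. (8.29)."""
    return (dh + math.sqrt(dh ** 2 + 4 * di_star * (di_star + dt) * dh / dt)) / 2

def reduced_tx_power(Pt, dh, dt, di_star, gamma):
    """New DATA transmission power P'_t via Eq. (8.24): the sensed power at
    the edge of the carrier sensing range stays Pcs, so the power scales as
    (d'_cs / d_cs)^gamma with d_cs = dt + d*_i from Eq. (8.22)."""
    d_cs = dt + di_star
    return Pt * (reduced_sensing_range(dh, dt, di_star) / d_cs) ** gamma
```

With dh = dt the range collapses back to dcs = dt + d∗i and the power stays Pt, reproducing the typical case noted above.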
8.5 Performance Analysis
8.5.1 Spatial Reuse Ratio
Due to the carrier sensing requirement, there is a certain area around the transmitter and the receiver where no other communication is allowed. In SBA-MAC, this area Sba consists of the carrier sensing range around the transmitter and the busy advertisement range around the receiver.
Sba = S(d∗i, (dh/dt) d∗i)    (8.30)

When power control is used for data frames,

Sba = S(d∗i √(dh/dt), d∗i √(dh/dt))    (8.31)
In the approach using a large carrier sensing range, as shown on the left side of Fig. 8–2, the area occupied by each transmission is Slcs and

Slcs = π dcs² = π (dt + d∗i)²    (8.32)

When power control is used for data frames,

Slcs = π ((dh + √(dh² + 4 d∗i (d∗i + dt) dh/dt)) / 2)²    (8.33)
8.5.2 Protocol Overhead
In this subsection, we only discuss the four-way handshake with RTS/CTS. A similar analysis can be applied to the two-way handshake without RTS/CTS.
In SBA-MAC, the transmission time of a data packet is increased due to the inserted dummy bits. Let Nba be the number of inserted IDFS periods. Notice that the channel time Tpba used for each data packet also includes the backoff period and the deferring time TBIFS due to sensed undecodable signals, which is also increased if BIFS is longer than EIFS. Let pba be the probability that a node near the current transmission defers TBIFS, successfully contends for the channel and begins to transmit after the current transmission is finished. Then 1 − pba is the probability that the transmission opportunity is obtained by the current transmitter or receiver, or by any other node that correctly overhears the current transmission. Let ph be the probability that a transmitter determines there is a hidden terminal and adopts the SBA procedure. Now, we can obtain

Tpba = Tbackoff + TRTS + TCTS + TACK + 3TSIFS + TDIFS + ph Nba TIDFS + TDATA + pba TBIFS    (8.34)
According to the procedure in Section 8.3.4, we have

Nba ≈ (TDATA − TPHY − TMAC) / (TBIFS − TRT − TTR)    (8.35)
It is easy to show that when

TBIFS = TRT + TTR + √(ph TIDFS (TDATA − TPHY − TMAC) / pba)    (8.36)

Tpba is minimized, and the minimum value is

min(Tpba) = Tbackoff + TRTS + TCTS + TACK + 3TSIFS + TDIFS + TDATA + pba (TRT + TTR) + 2 √(ph pba TIDFS (TDATA − TPHY − TMAC))    (8.37)
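The TBIFS trade-off behind Eqs. (8.34)-(8.37) can be checked numerically. The sketch below (names ours) uses the approximation Nba ≈ (TDATA − TPHY − TMAC)/(TBIFS − TRT − TTR) of Eq. (8.35), with t_turn standing for TRT + TTR:

```python
import math

def sba_overhead(t_bifs, ph, pba, t_idfs, dT, t_turn=0.0):
    """Extra channel time the SBA procedure adds per data packet:
    ph*Nba*T_IDFS + pba*T_BIFS, with Nba ~= dT/(T_BIFS - t_turn) and
    dT = T_DATA - T_PHY - T_MAC (cf. Eqs. 8.34-8.35)."""
    n_ba = dT / (t_bifs - t_turn)
    return ph * n_ba * t_idfs + pba * t_bifs

def optimal_bifs(ph, pba, t_idfs, dT, t_turn=0.0):
    """T_BIFS that minimizes sba_overhead (cf. Eq. 8.36)."""
    return t_turn + math.sqrt(ph * t_idfs * dT / pba)
```

At the optimum the extra time equals pba(TRT + TTR) + 2√(ph pba TIDFS dT), matching the last terms of Eq. (8.37).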
For the FAMA scheme [47], we assume that FAMA uses the same carrier sensing range as SBA-MAC to obtain a good spatial reuse ratio. Similarly to pba, we define pfama in FAMA. The channel time Tpfama used for each data packet in FAMA is

Tpfama = Tbackoff + TRTS + TCTS + TACK + 3TSIFS + TDIFS + TDATA + pfama max{TDATA}    (8.38)
Similarly, the channel time Tplcs used for each data packet in the approach using a large carrier sensing range is equal to

Tplcs = Tbackoff + TRTS + TCTS + TACK + 3TSIFS + TDIFS + TDATA + plcs TEIFS    (8.39)
Since Slcs > Sba, there are more nodes that sense undecodable signals. That is to say,

plcs > pba    (8.40)
Now we can calculate the gain of SBA-MAC compared with the approach using a large carrier sensing range and with the FAMA scheme:

Ksba−lcs = (Slcs/Sba)(Tplcs/Tpba),  Ksba−fama = Tpfama/Tpba    (8.41)
8.5.3 Numerical Results
In this subsection, we adopt system parameters in the IEEE 802.11b standard.TaCCATime 6
15µs, SNR = 10dB,TIDFS = 2TSIFS + TaCCATime = 35µs, TEIFS = 364µs, TRTS =
352µs, TCTS = 304µs, TACK = 304µs, max{TDATA} = 10ms. Unless otherwise indi-
cated,TDATA = 8ms, TBIFS = TEIFS = 364µs, dh = dt, andph = pba = pfama = plcs =
1. In the following discussion and figures, LCS means the approach using a large carrier
sensing range.
Fig. 8–6 shows that SBA-MAC can greatly reduce the area occupied by each transmission compared to LCS. The gain is from 58% to 80% with power control for data frames, and from 80% to 144% without power control for data frames. Power control for data frames can greatly increase the spatial reuse.
Fig. 8–7 shows that the area occupied by each transmission increases with the interference radius, which is determined by the SNR requirement (Section 8.2). SBA-MAC always has a significant gain over LCS, from 31% to 141%.
Figure 8–6: Occupied area for a transmission normalized over the communication radius (PC: power control for DATA frames). [Plot of normalized occupied area S/dt² versus normalized hop distance dh/dt for SBA-MAC and LCS, with and without PC.]
Figure 8–7: Occupied area for a transmission normalized over the communication radius when dh = dt. [Plot of normalized occupied area S/dt² versus normalized interference radius di/dt for SBA-MAC and LCS.]
Figure 8–8: Channel time for a transmitted packet. [Plot of channel time (ms) versus TBIFS (ms) for SBA-MAC and LCS.]
Fig. 8–8 shows the channel time for a transmitted data packet in SBA-MAC. We can see that Tpba only changes by up to 1.9% when TBIFS ranges from 364µs to 964µs, although there is apparently an optimal value of TBIFS.
Fig. 8–9 illustrates that SBA-MAC only increases the channel time for each packet by about 0.85% to 8.5%. We plot the performance gains Ksba−lcs and Ksba−fama in Figure 8–10. It demonstrates that SBA-MAC can improve the throughput by 44% to 53% when TDATA is from 10 to 1 ms compared to the approach using a large carrier sensing range. The improvement is about 68% to 344% compared to the FAMA scheme.
8.6 Conclusions
In this chapter, we propose a new SBA-MAC scheme to solve the hidden terminal problem without using out-of-band signaling. The new scheme is based on the CSMA/CA or IEEE 802.11 MAC scheme. Some dummy bits are inserted in the data frame. During the periods of these dummy bits, the receiver sends out short busy advertisement signals to notify hidden terminals of the current transmission so that the latter defer their transmissions to avoid collision. Although the SBA-MAC protocol increases the transmission time of each data frame, it greatly increases the spatial reuse ratio and effectively addresses the hidden
Figure 8–9: Channel time for a transmitted packet. [Plot of channel time (ms) versus transmission time of a data frame Tdata (ms) for SBA-MAC and LCS at pba = plcs = 0.1, 0.5 and 1.0.]
Figure 8–10: Performance gain of SBA-MAC compared to the approach using a large carrier sensing range and the FAMA scheme. [Plot of performance gain versus Tdata (ms) for SBA-MAC vs. LCS and SBA-MAC vs. FAMA.]
terminal problem. The performance results show that SBA-MAC noticeably outperforms
the existing approaches addressing the hidden terminal problem.
CHAPTER 9
A DISTRIBUTED PACKET CONCATENATION SCHEME FOR SENSOR AND AD HOC NETWORKS
Along with the growing popularity of sensor and ad hoc networks, various kinds of services are expected to be supported. In wireless ad hoc networks, there are increasing demands for web traffic, voice over IP and streaming video from and to the Internet via the access points. In sensor networks, event-driven or periodic monitoring services are common. However, packets of various lengths are used by different services. Short packets have relatively large overhead at the MAC (medium access control) and physical layers and hence can significantly decrease the network throughput. In this chapter, we analyze the performance of a distributed adaptive packet concatenation (APC) scheme which is proposed to improve the network throughput. The APC scheme works at the interface queue of the data link layer. It adaptively concatenates several short packets destined to the same next hop into a long packet for the MAC layer's transmission, according to the congestion status as well as the observed channel status. The theoretical analysis is conducted in both single-hop and multihop networks, and the results show that the APC scheme can increase the throughput by up to 4 to 16 times.
9.1 Introduction
Recent years have seen greatly increasing interest in sensor and ad hoc networks. These networks can be quickly deployed at low cost and provide the desired mobility. They are finding a variety of applications such as disaster rescue, battlefield communications, hostile environment monitoring, collaborative computing and broadband mobile Internet. Various kinds of traffic often coexist in one network, such as voice, video, email, FTP, routing and web traffic. They have different characteristics and requirements, such as
bandwidth, delay and packet length, which provide great challenges for network protocols
to work efficiently.
Short data packets occupy a relatively large share of the channel resource due to the fixed physical and MAC layer overhead. They also lead to congestion and severe MAC contention more easily than long data packets for a given amount of data traffic. For the IEEE 802.11 protocols, the physical layer overhead includes a preamble, which is used to synchronize the transmitter and the receiver, and some control fields to notify the receiver of the channel coding and modulation schemes. The MAC layer overhead includes several MAC layer control frames, namely RTS (request to send), CTS (clear to send) and ACK (acknowledgment), the MAC addresses of the DATA frames, and interframe spacings such as SIFS and DIFS. The shorter the payload of the DATA frame, the smaller the throughput and the more channel resource is wasted.
Several schemes ([80, 65, 113, 76]) have been proposed to efficiently utilize the time-
varying channel in wireless LANs where nodes can directly communicate with each other.
When the channel quality is good, several packets are transmitted back to back with a large
channel rate at a time. Otherwise, a single packet is transmitted with a small channel rate.
These schemes are efficient in reducing the relative protocol overhead when a large channel
rate is used.
In sensor and ad hoc networks, data packets often need to be forwarded several times
before they reach the destinations. Each forwarding node needs to contend for the channel
with other nodes before it can transmit a packet. The MAC layer contention becomes more
severe when congestion happens and a lot of backlogged packets keep nodes contending
for the channel. Thus concatenating several packets into a large super packet can efficiently
reduce the MAC layer contention and collision. However, a long packet may need a long
transmission time during which the channel quality may change and hence encounter a
high probability of bit errors. Therefore it is necessary to consider the channel status when
combining the packets to guarantee that the total transmission time does not exceed the
213
channel coherence time as well as to consider the queue status to check the availability of
packets. This is the proposed adaptive packet concatenation (APC) scheme in this chapter.
And the performance of APC is analyzed theoretically in both single hop and multihop ad
hoc networks.
The rest of this chapter is organized as follows. Section 9.2 introduces the basics of the IEEE 802.11 MAC protocol. The proposed scheme and its performance analysis are given in Section 9.3. Finally, Section 9.4 concludes this chapter.
9.2 Operations of the IEEE 802.11
In this section, we discuss the basic procedures of the IEEE 802.11 MAC protocol. Since it is widely used, we will analyze the proposed adaptive packet concatenation scheme based on this protocol in the next section.
The basic access method in the IEEE 802.11 MAC protocol is DCF (Distributed Coor-
dination Function), which is based on carrier sense multiple access with collision avoidance
(CSMA/CA). Before starting a transmission, each node performs a backoff procedure, with
the backoff timer uniformly chosen from [0, CW-1] in terms of time slots, where CW is
the current contention window. When the backoff timer reaches zero, the node transmits a
DATA packet. If the receiver successfully receives the packet, it acknowledges the packet
by sending an acknowledgment (ACK). If no acknowledgment is received within a speci-
fied period, the packet is considered lost; so the transmitter will double the size of CW and
choose a new backoff timer, and start the above process again. When the transmission of a
packet fails for a maximum number of times, the packet is dropped. To avoid collisions of
long packets, the short RTS/CTS (request to send/clear to send) frames can be employed.
The timing structure of the message sequences is shown in Fig. 9–1.
Note that the IEEE 802.11 MAC also incorporates an optional access method called
PCF (Point Coordination Function), which is only usable in infrastructure network config-
urations of wireless LANs and does not support multihop communications. In this chapter,
we thus focus on the IEEE 802.11 DCF.
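The backoff behavior described above can be sketched as a tiny simulation. This is an illustrative model only (the function name and parameters are ours; it abstracts away carrier sensing and slot timing):

```python
import random

def dcf_backoff_slots(p_fail, cw_min=32, cw_max=1024, retry_limit=7, rng=random):
    """Backoff slots spent on one packet under binary exponential backoff:
    draw the timer uniformly from [0, CW-1], double CW (up to cw_max) after
    each failed attempt, and drop the packet after retry_limit failures.
    p_fail is the per-attempt loss probability; returns (slots, delivered)."""
    cw, slots = cw_min, 0
    for _attempt in range(retry_limit + 1):
        slots += rng.randrange(cw)        # uniform backoff timer in [0, CW-1]
        if rng.random() >= p_fail:        # attempt succeeded: ACK received
            return slots, True
        cw = min(2 * cw, cw_max)          # double CW and retry
    return slots, False                   # retry limit exceeded: packet dropped
```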
Figure 9–1: RTS/CTS mechanism and basic access mechanism of IEEE 802.11.
9.3 Adaptive Packet Concatenation (APC) Scheme and Performance Analysis
In this section, we first introduce the basic mechanisms of the APC scheme. Then we analyze how much this scheme can improve the throughput in both single-hop and multihop networks.
9.3.1 Basic Scheme
APC works at the data link layer consisting of a shared interface queue and a MAC
sublayer as shown in Fig.9–2. It concatenates several packets in the interface queue which
have the same next hop into a super packet. A super packet instead of the original packets
is sent to the MAC layer each time when the MAC layer is idle and the queue is not empty.
The super packet structure is shown in Fig. 9–3. It contains one or more data packets. The subfields for each data packet consist of three parts: a length field, the data packet itself and an optional CRC field. The length subfield is used at the receiver to split the super packet into the original data packets. The CRC subfield is used to check the integrity of the data packet against possible channel bit errors. It should be used if the receiver enables selective acknowledgements, which indicate which data packets are corrupted by channel errors and need retransmissions. If some data packets contain errors, the transmitter only needs
Figure 9–2: Protocol stack. [The data link layer, comprising the interface queue and the MAC sublayer, sits between the network layer and the physical layer.]
Figure 9–3: The super packet structure. [A MAC header followed, for each of the n concatenated packets, by a 2-byte length field Li, the Li-byte data packet itself, and a 2-byte CRCi.]
to retransmit those corrupted data packets and to reconstruct the super packet from the available data packets in the queue at each retransmission.
The length lsp of a super packet is always less than or equal to a concatenation threshold Lth. This threshold is determined by the channel coherence time Tcc during which the channel quality remains stable [113]. The transmission time tsp of a super packet includes the transmission time of the physical and MAC layer overhead and the transmission time of the super packet itself, and tsp must be less than or equal to Tcc. Thus we have

Lth = rdata × (Tcc − THphy − THMAC − TACK − sifs)    (9.1)

for the case where there is no RTS or CTS, where rdata is the data rate of the DATA frame, and THphy and THMAC are respectively the transmission times of the physical and MAC headers of a DATA frame, and

Lth = rdata × (Tcc − THphy − THMAC − TRTS − TCTS − TACK − 3 sifs)    (9.2)

for the case where RTS and CTS are used, where TRTS, TCTS and TACK are respectively the transmission times of the RTS, CTS and ACK frames.
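Equations (9.1) and (9.2) amount to subtracting the fixed per-exchange overhead from the coherence-time budget; a sketch (the function name is ours; times in seconds, rdata in bytes per second):

```python
def concatenation_threshold(r_data, T_cc, T_phy, T_mac, T_ack, sifs,
                            T_rts=None, T_cts=None):
    """Concatenation threshold L_th (Eqs. 9.1-9.2): the largest super packet
    whose whole exchange still fits inside the channel coherence time T_cc."""
    budget = T_cc - T_phy - T_mac - T_ack - sifs
    if T_rts is not None:                   # four-way handshake, Eq. (9.2):
        budget -= T_rts + T_cts + 2 * sifs  # RTS + CTS and two extra SIFS
    return r_data * budget
```

Using RTS/CTS shrinks the threshold, since more of the coherence window goes to control frames.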
Each time the MAC layer picks up DATA packets from the interface queue and starts channel contention, APC concatenates the packet at the head of the queue with several other packets that have the same next hop. These packets appear in the super packet in the order in which they appear in the queue. The concatenation ends when concatenating one more packet would make the length of the super packet exceed Lth.
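The concatenation step can be sketched as a greedy pass over the interface queue. This is our illustrative reading of the rule (the scheme does not spell out how packets that do not fit are handled; here they are simply left in the queue):

```python
def build_super_packet(queue, L_th, overhead=4):
    """Greedy concatenation: the head-of-line packet fixes the next hop, and
    further same-next-hop packets are appended in queue order while the
    super packet stays within L_th.  queue is a list of (next_hop, length)
    pairs; each packet adds `overhead` bytes (2-byte length + 2-byte CRC).
    Returns (next_hop, chosen_lengths, remaining_queue)."""
    next_hop = queue[0][0]
    chosen, total, rest = [], 0, []
    for hop, length in queue:
        if hop == next_hop and total + length + overhead <= L_th:
            chosen.append(length)
            total += length + overhead
        else:
            rest.append((hop, length))    # left for a later super packet
    return next_hop, chosen, rest
```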
To support multiple channel rates, APC calculates Lth using the current transmission rate of the MAC layer. There are basically two methods to determine the transmit rate rdata. First, it can be determined from history: the transmitter determines rdata according to the received power Pr of the last ACK frame from the next hop if the last transmission was successful; otherwise it uses a lower rate or the lowest available rate. In the second method, the transmit rate rdata is determined by the received power Pr of the CTS frame from the next hop. The first method depends on the result of the previous transmission and may conclude with a wrong channel quality, because a transmission failure can result from a collision as well as from poor channel quality. The second method uses the short RTS/CTS frames to probe the channel quality before the DATA transmission and has more accurate channel information. Although the second method requires RTS/CTS frames, RTS/CTS are also useful to shorten collision periods. Therefore, APC uses the second method to determine rdata.
To utilize multiple channel rates, we must notice that different channel rates have different requirements on the received power threshold RXthresh and the signal to interference plus noise ratio (SINR). The widely used IEEE 802.11b supports 1, 2, 5.5 and 11Mbps. In Equation (9.3), RXthreshi and CPthreshi (1 ≤ i ≤ 4) are the thresholds required by the hardware to correctly decode the received signals:

rdata = 1Mbps    (RXthresh1 ≤ Pr < RXthresh2 and SINR ≥ CPthresh1)
        2Mbps    (RXthresh2 ≤ Pr < RXthresh3 and SINR ≥ CPthresh2)
        5.5Mbps  (RXthresh3 ≤ Pr < RXthresh4 and SINR ≥ CPthresh3)
        11Mbps   (Pr ≥ RXthresh4 and SINR ≥ CPthresh4)    (9.3)

For example, the requirements of a PCMCIA Silver/Gold card by Orinoco are RXthresh1 = −94dBm, RXthresh2 = −91dBm, RXthresh3 = −87dBm, RXthresh4 = −82dBm, CPthresh1 = 4dB, CPthresh2 = 7dB, CPthresh3 = 11dB, and CPthresh4 = 16dB.
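The rate selection of Eq. (9.3) can be sketched as picking the highest rate whose receive-power and SINR thresholds are both met (a slight relaxation of the strict per-rate power ranges). The threshold values are the Orinoco card numbers quoted above; the names are ours:

```python
RX_THRESH = [-94, -91, -87, -82]   # dBm, RXthresh1..RXthresh4
CP_THRESH = [4, 7, 11, 16]         # dB,  CPthresh1..CPthresh4
RATES = [1, 2, 5.5, 11]            # Mbps

def select_rate(pr_dbm, sinr_db):
    """Highest rate whose thresholds are both satisfied; None if undecodable."""
    chosen = None
    for rate, rx, cp in zip(RATES, RX_THRESH, CP_THRESH):
        if pr_dbm >= rx and sinr_db >= cp:
            chosen = rate
    return chosen
```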
9.3.2 Performance Analysis of the Network Throughput in the Single Hop Case
In this subsection, we analyze how much improvement APC can achieve in the saturated throughput and the maximum throughput when the IEEE 802.11 MAC protocol is used in a single-hop network.
Let Rs denote the ratio of the time occupied by successful transmissions to the total time. Then, following the techniques in the papers [150, 15], we have

Rs = ps Ts / (pi σ + ps Ts + (1 − pi − ps) Tc)
pi = (1 − pt)^n
ps = n pt (1 − pt)^(n−1)
p = 1 − (1 − pt)^(n−1)    (9.4)

where Ts is the average successful transmission time, Tc is the average collision time, σ is a MAC layer idle slot time, pt is the transmission probability of each node in any slot, n is the total number of nodes in the network, and p is the probability that a node encounters a collision whenever it transmits. And from [68],
Ts = TRTS + TCTS + Tdata + TACK + 3 sifs + difs
Tc = TRTS + sifs + TCTS + difs    (9.5)

for the case where the RTS/CTS mechanism is used, and

Ts = Tdata + TACK + sifs + difs
Tc = Tdata∗ + TACK timeout + difs    (9.6)
S = n(1−p)(1−(1−p)^(1/(n−1))) Lp / [ (1−p)^(n/(n−1)) σ + n(1−p)(1−(1−p)^(1/(n−1))) Ts + (1 − n(1−p)(1−(1−p)^(1/(n−1))) − (1−p)^(n/(n−1))) Tc ]    (9.10)

SAPC = n(1−p)(1−(1−p)^(1/(n−1))) Lspl / [ (1−p)^(n/(n−1)) σ + n(1−p)(1−(1−p)^(1/(n−1))) Ts + (1 − n(1−p)(1−(1−p)^(1/(n−1))) − (1−p)^(n/(n−1))) Tc ]    (9.11)
for the case where there is no RTS/CTS mechanism, where Tdata and Tdata∗ (please refer to [15, 160] for the derivation of Tdata∗) are the average lengths, in seconds, of the successful transmissions and collisions of the data packets, respectively. If the average packet length is Lp, then

Tdata = Lp/rdata + THphy + THMAC    (9.7)
Now the network throughput S can be expressed as Rs multiplied by the DATA transmission rate rdata, excluding the physical and MAC layers' overhead, i.e.,

S = Rs × (Lp/rdata)/Ts × rdata = ps Lp / (pi σ + ps Ts + (1 − pi − ps) Tc)    (9.8)
For the saturated case, where each node always has a packet contending for the shared wireless channel, Bianchi [15] derived the formula for the transmission probability pt in any slot in terms of p. Considering a finite retransmission limit followed by packet dropping, we further derived pt in the paper [160] as

pt = 2(1 − p^(α+1)) / [1 − p^(α+1) + (1 − p) W Σ_{i=0}^{α} (2p)^i],    α ≤ m
pt = 2(1 − p^(α+1)) / [1 − p^(α+1) + (1 − p) W Σ_{i=0}^{m−1} (2p)^i + 2^m W (p^m − p^(α+1))],    α > m    (9.9)
where α is the maximum allowed number of retransmissions, W is the minimum contention window size, and 2^m W is the maximum contention window size. By Equations (9.8) and (9.9), we can derive the values of p, pt and S for the saturated case, referred to as p̄, p̄t and S̄.

For the non-saturated case, where not all the nodes are contending for the channel, the collision probability p is smaller than p̄ and hence may achieve a larger throughput. From Equations (9.4) and (9.8), S can be expressed as a function of p. S is equal to 0, S̄ and 0 when p = 0, p̄ and 1, respectively. To obtain the maximum value of S, denoted by S∗, and the corresponding value of p, denoted by p∗, let

dS/dp = 0    (9.12)

Let p̂ be the root of Equation (9.12). Then

p∗ = min(p̂, p̄)    (9.13)
In the APC scheme, the network throughput SAPC can be calculated with Equation (9.11), which is obtained from Equation (9.10) by excluding the APC overhead. Here Lspl is the average total length of the concatenated packets in a super packet. For the case where the packet length Lp is fixed, we have

Lspl = ⌊Lth/(Lp + 4)⌋ Lp
Lsp = ⌊Lth/(Lp + 4)⌋ (Lp + 4)    (9.14)

where ⌊Lth/(Lp + 4)⌋ is the greatest integer less than or equal to Lth/(Lp + 4), and Lsp is the average length of a super packet. In Equation (9.11), Ts and Tc are calculated by Equations (9.5), (9.6) and (9.7) according to the average super packet length Lsp, while in Equation (9.10), Ts and Tc are calculated according to the average packet length Lp.
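For fixed-length packets, Eq. (9.14) reduces to a floor computation; a sketch (the function name is ours; the 4 bytes are the per-packet length and CRC subfields):

```python
def super_packet_sizes(L_th, L_p, per_pkt_overhead=4):
    """Payload L_spl and total length L_sp of a super packet built from
    fixed L_p-byte packets (Eq. 9.14)."""
    k = L_th // (L_p + per_pkt_overhead)   # how many packets fit under L_th
    return k * L_p, k * (L_p + per_pkt_overhead)
```

For example, with Lth = 2346 bytes and Lp = 100 bytes, 22 packets fit, giving Lspl = 2200 bytes of payload in an Lsp = 2288-byte super packet.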
Now the network throughputs of the IEEE 802.11 protocol with and without the APC scheme can be calculated by Equations (9.10) and (9.11) using p∗ and p̄. The numerical results are shown in Fig. 9–4, where n = 200. The parameter values of the IEEE 802.11 system are shown in Table 9–1.

From Fig. 9–4, we have two important observations. First, the APC scheme can greatly increase the throughput when the packet length is smaller than half of the concatenation threshold Lth. For the saturated case, the throughput of the APC scheme is up to 3.5 times that of the IEEE 802.11 protocol when the data packet length is equal to 100 bytes. And the maximum throughput of the APC scheme is up to 2.7 times that of the IEEE 802.11
Figure 9–4: Throughput when the channel rate is 1Mbps, Lth = 2346 bytes and the RTS/CTS mechanism is used. [Aggregate throughput (Mbps) versus packet length Lp (bytes): maximum and saturated throughput for 802.11 and 802.11+APC.]
Table 9–1: IEEE 802.11 system parameters

  Channel bit rate: 1 Mbit/s
  PHY header: 192 bits
  MAC header: 224 bits
  Length of RTS: 160 bits + PHY header
  Length of CTS: 112 bits + PHY header
  Length of ACK: 112 bits + PHY header
  Initial backoff window size (W): 32
  Maximum backoff stages (m): 5
  Short retry limit: 7
  Long retry limit: 4
protocol. Second, a smaller collision probability is desired to obtain a larger throughput, since the maximum throughput is always larger than the saturated throughput. Specifically, the maximum throughput of the IEEE 802.11 protocol is much larger than its saturated throughput, especially when the data packets are short. The improvement ranges from 4% to 32% when the packet length decreases from 2346 to 100 bytes. When the APC scheme is used, the improvement ranges from 4% to 7%. In addition, a smaller collision probability is also required to achieve a shorter delay and better energy efficiency. It is desirable to design a scheme that supports a small collision probability
Figure 9–5: Throughput when the channel rate is 1, 2, 5.5 and 11Mbps and the RTS/CTS mechanism is used. [Aggregate throughput (Mbps) versus channel rate: maximum and saturated throughput for 802.11 and 802.11+APC.]
while achieving or approaching the maximum throughput. One such scheme can be found
in the paper [149].
Fig. 9–5 shows the throughput at different channel rates, where the packet length Lp = 512 bytes and the channel coherence time Tcc is the same as in Fig. 9–4. In the APC scheme, the throughput increases approximately linearly with the channel rate. However, the throughput of the IEEE 802.11 protocol does not increase much with the channel rate. This is because the relative protocol overhead is much larger at a higher channel rate in the IEEE 802.11 protocol. The improvement of the APC scheme is up to 6.2 and 4.5 times when the channel rate is 11Mbps, for the saturated throughput and the maximum throughput, respectively.
9.3.3 Performance Analysis of the Network Throughput in a Multihop Network
In a multihop wireless network, the collision probability is not easy to derive. As in the single-hop network, each node has to contend for the channel with the nodes in its own carrier sensing range. Furthermore, the hidden terminals of a transmitter, which may be two hops away and cannot sense the transmission, may initiate a new transmission that
Figure 9–6: Chain topology. [Nodes numbered 1 to 12 placed in a line.]
introduces a collision at the intended receiver of the ongoing transmission. This kind of collision depends on the network topology and is difficult to characterize.
In this section, we derive the maximum throughput that the IEEE 802.11 protocol and the APC scheme can achieve, instead of their exact throughput, which differs across network deployments. Then we discuss how to approach this maximum throughput in a wireless multihop network. We first study a multihop flow which travels through a chain topology, as shown in Fig. 9–6, where small circles denote the transmission range and large circles denote the carrier sensing range.
The maximum throughput of a multihop flow is achieved when the packet scheduling fully utilizes the spatial resource, i.e., when it schedules as many concurrent transmissions as possible with a SINR high enough for correct decoding at the receivers. On the other hand, nodes will not initiate any new transmission if they sense a busy channel, due to the carrier sense procedure required in the IEEE 802.11 protocol. Thus we have two requirements for maximum spatial reuse. First, there is only one transmission in the carrier sensing range of each node. Second, the power ratio of the received signal to the interference from other transmissions must be larger than or equal to a certain threshold, as shown in Equation (9.3). Let γ denote the path loss exponent; then the power level Pr of the received signal equals

Pr = Po (do/dh)^γ    (9.15)
where do is the distance between the transmitter and a reference point, Po is the power level of the signal received at the reference point, and dh is the distance between the transmitter and the intended receiver. In the regular chain topology in Fig. 9–6, dh is also the hop distance.
In the chain topology, the strongest interference comes from the concurrent transmission that is closest to the receiver. Other interference can be neglected due to its much smaller power level. Let di denote the distance between two concurrent transmitters in the chain topology. For example, if the transmitter-receiver pairs (1,2) and (5,6) are scheduled to transmit at the same time, then di = 4dh. Let Pi denote the power level of the interference signal. Given a certain SINR requirement, we have

SINR ≤ Pr/Pi = ((di − dh)/dh)^γ  ⇒  di ≥ dh (SINR^(1/γ) + 1)    (9.16)
Thus the minimum spacing N, in hops, between two concurrent transmitters equals

N = ⌈SINR^(1/γ)⌉ + 1    (9.17)

where ⌈x⌉ is the ceiling function and equals the smallest integer larger than or equal to x.
Thus the maximum end-to-end throughput Schain of a multihop flow in a regular chain topology is

Schain = (Lp/Ts) × (1/N) = Lp / (Ts (⌈SINR^(1/γ)⌉ + 1))    (9.18)

where Lp/Ts is the maximum throughput at each hop and 1/N is the spatial reuse ratio. For the
APC scheme, the maximum end-to-end throughputSchain APC is obtained by using a super
packet instead of a data packet:
Schain APC =Lspl
Ts(dSINR1γ e+ 1)
(9.19)
whereTs is calculated according to the length ofLsp.
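Equations (9.17)-(9.19) are easy to evaluate; a sketch that also converts the SINR requirement from dB (the function names are ours):

```python
import math

def chain_spacing(sinr_db, gamma):
    """Minimum spacing N, in hops, between concurrent transmitters on a
    regular chain (Eq. 9.17)."""
    sinr = 10 ** (sinr_db / 10)            # dB -> linear power ratio
    return math.ceil(sinr ** (1 / gamma)) + 1

def chain_throughput(Lp_bits, Ts, sinr_db, gamma):
    """Maximum end-to-end throughput of a long chain flow (Eq. 9.18)."""
    return Lp_bits / Ts / chain_spacing(sinr_db, gamma)
```

With γ = 4, the 16dB requirement of 11Mbps gives N = 4, i.e., at most one active transmission per four hops.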
Fig. 9–7 shows the maximum end-to-end throughput of a multihop flow with at least four hops in the regular chain topology, where we set do = 1m, Po = 0dBm, γ = 4 and
Figure 9–7: Maximum end-to-end throughput of a multihop flow. [End-to-end throughput (Mbps) versus channel rate (Mbps) for 802.11 and 802.11+APC.]
Figure 9–8: Maximum end-to-end throughput of a multihop flow. [End-to-end throughput (Mbps) versus data packet length (bytes) for 802.11 and 802.11+APC.]
the data packet length is 512 bytes. The SINR requirement adopts the values discussed at the end of Section 9.3.1. The maximum end-to-end throughput of the APC scheme is 1.24, 1.53, 2.52 and 4.08 times that of the IEEE 802.11 protocol when the channel rate is equal to 1, 2, 5.5 and 11Mbps, respectively. Fig. 9–8, where the channel rate is 11Mbps, shows that the APC scheme achieves a stable and much higher end-to-end throughput at different packet lengths. The throughput of the APC scheme is 1.62 to 16.50 times that of the IEEE 802.11 protocol when the packet length decreases from 2246 bytes to 100 bytes.
To achieve the maximum end-to-end throughput, we must alleviate the hidden terminal problem as much as possible. In the chain topology, to prevent a node from becoming a hidden terminal and introducing a collision, the carrier sensing range must be large enough to include the nodes that can introduce enough interference to corrupt the ongoing transmission. Thus the radius dc of the carrier sensing range must satisfy

dh (SINR^(1/γ) + 1) ≤ dc ≤ dh (⌈SINR^(1/γ)⌉ + 1)    (9.20)
where the left inequality prevents collisions from the hidden terminal problem and the right inequality makes the maximum spatial reuse ratio possible.
Besides the hidden terminal problem, we also need to address the unfair medium access probability at each forwarding node to maximize the end-to-end throughput. One such scheme can be found in the paper [162], which addresses both medium contention and network congestion and can closely approach the above maximum end-to-end throughput. For a multihop flow in a more general topology, the maximum end-to-end throughput depends on the bottleneck location, where the spatial reuse is poorest and the interference from other flows is greatest. We leave the analysis of such topologies to future work.
9.4 Conclusion
In this chapter, we propose a distributed adaptive concatenation scheme for sensor and wireless ad hoc networks. It adaptively concatenates several short data packets into a large super packet according to the current channel quality and queue status. It effectively reduces the relative protocol overhead, especially when the multirate capability of the IEEE 802.11 protocol is considered and the data packets are short, which is the case for many applications. We also derive the throughput of the proposed scheme in both single-hop and multihop networks. The analytical results show that this scheme can improve the throughput by up to 4 to 16 times.
CHAPTER 10
IMPACT OF ROUTING METRICS ON PATH CAPACITY IN MULTIRATE AND MULTIHOP WIRELESS AD HOC NETWORKS
Finding a path with enough throughput in multihop wireless ad hoc networks is a critical task of QoS routing. Previous studies on routing algorithms focused on networks with a single channel rate. The capability of supporting multiple channel rates, though common in wireless systems, has not been carefully studied in routing algorithms. In this chapter, we first perform a comprehensive study of the joint impact of multiple rates, interference and packet loss rate on the maximum end-to-end throughput, or path capacity. A linear programming problem is formulated to find the path capacity of any given path. This problem is also extended to a joint routing and link scheduling optimization problem to find a path with the largest path capacity. We prove that the interference clique transmission time is inversely proportional to the upper bound of the path capacity, and hence we propose it as a new routing metric. Based on the proposed optimization problems, we evaluate the capability of various routing metrics, including hop count, expected transmission times, end-to-end transmission delay or medium time, link rate, bandwidth distance product, and interference clique transmission time, to find a path with high throughput. The results show that interference clique transmission time is a better routing metric than the others.
10.1 Introduction
Wireless ad hoc networks have attracted a lot of attention in recent years. They can be easily deployed at low cost and can support wireless communication via multiple wireless hops without the help of infrastructure, such as wireless base stations and the Internet. They are often referred to by different names in different scenarios, such as wireless sensor networks, mobile ad hoc networks and wireless mesh networks, wherever there exists multihop wireless communication.
To support end-to-end communication in these networks, routing algorithms play a significant role in finding good paths and forwarding nodes between sources and their destinations. However, finding a good path is not as easy in a wireless ad hoc network as in a wired network because wireless links are significantly different from wired ones. First, wireless links are not reliable due to channel errors. Second, achievable channel rates may differ from link to link because link quality depends on the distance and path loss exponent between two neighbors. Third, links may cease to exist when neighbors move out of communication range. Fourth but not least, wireless transmission is broadcast in nature, and a transmission over one link will interfere with transmissions over other links in the neighborhood.
To address these challenges, considering the features of the physical layer and MAC layer is a must for a good routing algorithm. However, existing wireless ad hoc routing protocols typically find routes with the minimum hop count, whose shortcomings in multihop wireless networks have been recognized by much prior research. De Couto et al. showed in the paper [35] that many shortest paths have poor throughput due to loss rates over the radio links selected in these paths. They accordingly proposed in the paper [36] a new routing metric, expected transmission count (ETX), which considers packet loss rates over wireless links to obtain higher throughput. In the paper [75], Jain et al. studied the impact of interference on the performance of multihop wireless networks with an NP-complete optimization problem. They showed that, by considering interference, routes derived from the optimization problem often yield noticeably better throughput than the shortest-path routes. In the papers [77] and [56], Jia et al. and Gupta et al. further proposed heuristic algorithms that consider interference by solving an optimization problem and find paths satisfying a certain bandwidth requirement.
Besides packet loss rate and interference, multirate capability is another common feature of wireless links. A higher data rate can be used to improve throughput if a better signal quality is observed over a link. However, a higher data rate often means a shorter transmission distance and hence more hops in the selected path. The data rate of a link is also subject to change because of the time-varying channel and changing interference in the neighborhood. Notice that the packet loss ratio may not be as significant as discussed in the paper [75] if an auto-rate MAC protocol is adopted, as in the IEEE 802.11 protocol. A low rate is automatically used when a high packet loss rate is observed, and its less strict SNR (signal to noise ratio) requirement then leads to a low packet loss rate.

Not surprisingly, multirate capability has a great impact on routing algorithms and hence deserves careful study in multihop wireless ad hoc networks. It seems natural that end-to-end throughput will be improved if we allow multiple rates to coexist in the network, where a higher channel rate is used over each link if it can deliver more packets in the same period, taking packet loss rates into account. However, in the paper [84], Kawadia and Kumar showed that a single-rate wireless ad hoc network may have better performance than a network where multiple rates coexist if the shortest-hop routing algorithm is used. The reason behind their finding is that a shortest-hop routing algorithm often chooses links with the lowest channel rate, while a fixed higher channel rate may still be able to generate a feasible path between the source and its destination and lead to a higher end-to-end throughput.
Several papers in the literature have already started to design good routing metrics for a multirate wireless ad hoc network. In the paper [39], Draves, Padhye and Zill proposed to use the weighted cumulative expected transmission time (WCETT) as a routing metric. In the paper [6], Awerbuch, Holmer and Rubens adopted the medium time metric (MTM). In the paper [155], Zhai and Fang studied the impact of multiple rates on carrier sensing ranges and the spatial reuse ratio, and accordingly demonstrated that bandwidth distance product and end-to-end transmission delay (the same as medium time) are better routing metrics than the hop count.
However, there is still no comprehensive study evaluating the capability of these routing metrics to maximize end-to-end throughput with consideration of coexisting multiple rates and their close relationship with packet loss rate and interference. These factors make it difficult to design a good routing metric to find the path with the widest bandwidth. We use a simple example in Fig. 10–1 to illustrate why some routing metrics fail to do so.
In Fig. 10–1, all users are assumed to transmit over the same channel with a fixed transmission power and to conform to the IEEE 802.11 protocols. Suppose the highest achievable channel rate over links along path 1 from S1 to D1 is 2 Mbps, and the highest achievable channel rate over links along path 2 is 54 Mbps. Apparently, if the SNR requirement for 1 Mbps is larger than 0 dB, transmissions over any two hops along path 1 cannot be successful at the same time. Then the maximum end-to-end throughput of path 1 is proportional to 2/3 Mbps. Suppose, for the same reason, that only one successful transmission is allowed at a time along path 2. The maximum end-to-end throughput along path 2 is 54/12 = 4.5 Mbps. It is similar for paths 3 and 4, except that path 4 passes through a large number of short hops, resulting in a very long end-to-end transmission delay. Suppose that transmissions along path 4 can be simultaneously successful every 12 hops, so the maximum end-to-end throughput of path 4 is similar to that of path 2, i.e., 4.5 Mbps. It is straightforward that path 1 will be selected from S1 to D1 if a routing algorithm minimizes the hop count. Minimizing transmission times still leads to path 1. Minimizing end-to-end transmission delay/medium time or maximizing the minimum bandwidth distance product over all links along the path will generate path 2. For paths 3 and 4 from S2 to D2, hop count, ETT and end-to-end transmission delay all lead to path 3, while bandwidth distance product leads to path 4, which has a much higher throughput than path 3.

Figure 10–1: Paths between the source S and the destination D (paths 1 and 2 from S1 to D1; paths 3 and 4 from S2 to D2).

It seems bandwidth
distance product works better than all others to find paths with high throughput. However, does it work well in a more general topology? Does there exist an even better routing metric?
In this chapter [158], we endeavor to address all these factors together in an extended link conflict graph model. A linear programming optimization problem is formulated to solve for the path capacity, or maximum end-to-end throughput, of a given path. The solution of the path capacity in some scenarios implies that interference clique transmission time is a good routing metric for finding paths with high throughput. The solution of the optimization problem establishes a foundation for evaluating the relative performance of different routing metrics. The model is also extended to a joint optimization problem of link scheduling and routing to find the optimum path between the source and the destination that has the largest end-to-end throughput. Though the joint optimization problem requires a centralized implementation and is NP-complete, it provides a measure of how good the routing metrics really are compared to the best possible one. The results show that end-to-end transmission delay and interference clique transmission time are, on average, the best two among all the metrics mentioned above, and interference clique transmission time consistently leads to paths with throughput close to the optimum and higher than those obtained with other routing metrics. In addition, interference clique transmission time can find paths with up to 10% more throughput than end-to-end transmission delay, especially when the distance between the source and its intended destination is long, say more than 4 hops under the shortest-hop routing algorithm. Furthermore, we illustrate that good routing metrics can generate paths with higher throughput in a multirate wireless ad hoc network than any routing metric in a single-rate wireless ad hoc network with any single possible channel rate.
The rest of this chapter is organized as follows. Section 10.2 discusses the impact of multirate capability on network performance. We extend the link conflict graph to consider multiple rates, interference and packet loss rate together to solve the path capacity of any given path in the network in Section 10.3. In Section 10.4, we extend the Bellman-Ford routing algorithm to utilize several different routing metrics. The relative performance of different routing metrics is evaluated in Section 10.5. Finally, Section 10.6 concludes this chapter.
10.2 Impact of Multirate Capability on Path Selection in Wireless Ad Hoc Networks
In wireless ad hoc networks, the channel rate over each link can be adaptively selected according to the link signal quality. When the signal quality is good, a high channel rate is used; otherwise, a low channel rate is used. This auto rate selection has been widely adopted by 802.11 products. In this section, we study the impact of multiple channel rates on path selection in wireless ad hoc networks and try to identify the important factors we should consider in path selection.
10.2.1 Receiver Sensitivity and SNR for Multiple Rates
Wireless devices have to satisfy two conditions to correctly decode a received packet. First, the received signal strength of the intended packet must be larger than a threshold, which is called the receiver sensitivity. Second, the signal to noise-plus-interference ratio (SNR) has to be larger than a certain threshold.

Table 10–1: Signal-to-noise ratio and receiver sensitivity

Rates (Mbps)   SNR (dB)   Receiver sensitivity (dBm)
54             24.56      -65
48             24.05      -66
36             18.80      -70
24             17.04      -74
18             10.79      -77
12              9.03      -79
 9              7.78      -81
 6              6.02      -82

The receiver sensitivity defines a transmission range, only within which a transmission can be successful. The SNR indicates how much interference can be tolerated and determines the spatial reuse ratio, i.e., the maximum number of concurrent successful transmissions in a certain area.
Wireless systems, like UWB and 802.11 systems, normally support multiple channel rates. For example, all of the IEEE 802.11a/b/g standards support multiple channel rates. Specifically, 1, 2, 5.5, and 11 Mbps are supported by 802.11b, and 6, 9, 12, 18, 24, 36, 48, and 54 Mbps are supported by 802.11a/g. Different channel rates have different requirements on the receiver sensitivity and SNR. Table 10–1 shows the requirements of one 802.11a product [143]. Therefore, the transmission radius and spatial reuse ratio may be significantly different for different channel rates.
10.2.2 Tradeoff between the Rate and the Transmission Distance
A higher channel rate can achieve higher throughput than a lower channel rate over one link. However, it often has a shorter maximum transmission distance [32] because of its higher requirements on the receiver sensitivity and SNR. Therefore, using higher channel rates at the forwarding nodes often results in more hops between a source and its intended destination. On the contrary, a path with the smallest number of hops often travels through links with low channel rates, and hence may suffer from throughput loss.
10.2.3 Carrier Sensing Range, Interference and Spatial Reuse
In a CSMA/CA (carrier sense multiple access with collision avoidance) MAC protocol, like the IEEE 802.11 MAC protocols, each node must sense an idle channel before any transmission. The area around a node, in which it can sense transmissions from other nodes, is called its carrier sense range. Therefore, in each carrier sense range, there is at most one successful transmitter or transmission.
Because a higher channel rate has a shorter transmission distance, it requires more hops to travel through one carrier sense range than a lower channel rate. Therefore, the spatial reuse ratio is low for high channel rates. Here the spatial reuse ratio is measured by the reciprocal of the number of hops between any two concurrent successful transmissions. For example, at 54 Mbps, the maximum spatial reuse ratio may only be achieved by scheduling concurrent transmissions at links that are at least 8 hops away from each other [155]. On the other hand, the hop separation at which the maximum spatial reuse ratio is achieved can be 3 for 1 Mbps.
The other reason that a high channel rate has a low spatial reuse ratio is its high requirement on the SNR. Assuming that the transmission power is the same for the intended signal and the interfering signal, the SNR satisfies

$$\mathrm{SNR} \propto \left(\frac{d_i}{d_h}\right)^{\gamma} \qquad (10.1)$$

where $d_h$ is the hop distance, or the distance between the transmitter and the receiver, $d_i$ is the distance between the receiver and the interfering node, and $\gamma$ is the path loss exponent. Thus a higher SNR requires a larger value of $(d_i/d_h)^{\gamma}$, leading to a lower spatial reuse ratio.
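The separation ratio implied by Eq. (10.1) can be computed from the SNR requirements of Table 10–1. This is a rough sketch under the stated assumptions (noise ignored, identical transmission powers); the path loss exponent of 4 is an illustrative choice, not a value fixed by the dissertation.

```python
def min_separation_ratio(snr_db, gamma=4.0):
    """Minimum d_i/d_h such that an interferer at distance d_i still
    meets the SNR requirement, from Eq. (10.1):
    SNR = (d_i/d_h)**gamma  =>  d_i/d_h = SNR**(1/gamma)."""
    snr_linear = 10.0 ** (snr_db / 10.0)  # dB -> linear
    return snr_linear ** (1.0 / gamma)

# SNR requirements from Table 10-1 for the highest and lowest 802.11a rates
snr_req = {54: 24.56, 6: 6.02}
ratios = {rate: min_separation_ratio(db) for rate, db in snr_req.items()}
# Higher rates demand a larger interferer separation, hence lower spatial reuse
```

Under these assumptions, 54 Mbps requires roughly a 4x larger interferer separation (relative to the hop distance) than 6 Mbps, which quantifies the spatial reuse penalty of high rates.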
10.2.4 Effective Data Rate and Protocol Overhead
Although the channel rates have nominal values, the effective data rates seen by an application may be much smaller. They are closely related to the packet size and protocol overhead. In wireless systems, a preamble is often used for synchronization between the sender and the receiver. It has a fixed duration per standard and can be regarded as the physical layer overhead. Besides the physical layer overhead, the MAC header, IP header and TCP header of each packet also have fixed lengths and do not change with the channel rate.
The effective data rate $r_d$ can be computed as

$$r_d = \frac{L_{pl}}{T_{preamble} + \dfrac{L_H + L_{pl}}{r_c}} \qquad (10.2)$$
where $T_{preamble}$ is the time not related to the channel rate $r_c$, $L_{pl}$ is the length of the payload we intend to transmit, and $L_H$ is the length of the protocol overhead transmitted at the channel rate $r_c$. $T_{preamble}$ includes the physical layer preamble and may also include some MAC layer overhead, e.g., interframe spacing. $L_H$ includes the MAC, IP and TCP packet headers. As an example in 802.11, if RTS/CTS/ACK are transmitted at the basic rate and DATA is transmitted at the selected channel rate $r_c$, then
$$T_{preamble} = (T_{RTS} + T_{CTS} + 2T_{SIFS})\varphi + T_{SIFS} + T_{DIFS} + T_{phy} + T_{ACK}, \quad
\varphi = \begin{cases} 1, & \text{if RTS/CTS are used} \\ 0, & \text{if RTS/CTS are not used} \end{cases} \qquad (10.3)$$
where $T_{RTS}$, $T_{CTS}$, and $T_{ACK}$ are the transmission times of the RTS, CTS, and ACK frames, respectively, and $T_{phy}$ is the transmission time of the physical preamble of the MAC DATA frame. $T_{SIFS}$ and $T_{DIFS}$ are the interframe spacing times of SIFS and DIFS, respectively. If $L_{pl}$ approaches infinity, $r_d$ approaches $r_c$.
Given the length of a packet payload $L_{pl}$, the larger the channel rate, the larger the share of a packet's transmission time the preamble occupies, which means a heavier relative protocol overhead. A high channel rate is normally preferred, but the corresponding large protocol overhead must be carefully considered ([142, 155]).
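Eq. (10.2) can be evaluated directly to see how the fixed overhead erodes high channel rates. The overhead numbers below (192 us of rate-independent preamble/IFS time, 64 bytes of headers) are illustrative assumptions rather than standard-exact 802.11 values.

```python
def effective_rate(l_payload_bits, l_header_bits, rate_bps, t_preamble_s):
    """Effective data rate r_d from Eq. (10.2)."""
    return l_payload_bits / (t_preamble_s + (l_header_bits + l_payload_bits) / rate_bps)

# Illustrative overhead: 192 us rate-independent time, 64-byte MAC/IP/TCP headers
T_PRE, L_H = 192e-6, 64 * 8
r_low = effective_rate(100 * 8, L_H, 1e6, T_PRE)    # 100-byte payload at 1 Mbps
r_high = effective_rate(100 * 8, L_H, 54e6, T_PRE)  # 100-byte payload at 54 Mbps
# The efficiency r_d/r_c is far worse at 54 Mbps than at 1 Mbps for short
# packets, because the fixed preamble dominates the (short) payload airtime
```

As the payload grows, $r_d$ approaches $r_c$ at any rate, consistent with the limit noted above.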
10.3 Path Capacity in Wireless Ad Hoc Networks
It is a fundamental issue to know the maximum end-to-end throughput, referred to as the path capacity hereafter, of a given path or multiple paths in a wireless ad hoc network. Any traffic load larger than the path capacity is not supported and even deteriorates the performance as a result of excessive medium contention [162, 151, 152, 157]. The knowledge
Figure 10–2: A five-link chain topology (nodes A–F, links 1–5) and its link conflict graph.
of the path capacity can be used to reject excessive traffic in admission control for real-time services. It can also be used in routing algorithms to find a path with the largest capacity, or to evaluate the performance of different routing algorithms. Furthermore, the derivation of the path capacity may also suggest novel and efficient routing metrics.

However, it is not easy to derive the path capacity of paths in wireless ad hoc networks, considering all the factors discussed previously. In this section, we first extend the link conflict graph model to describe the necessary conditions required by those factors. Then we formulate the problem as a link scheduling problem with the help of the flow conflict graph.
In this chapter, we assume that either there is no power control scheme or the transmission power of each node is known before link scheduling.
10.3.1 Link Conflict Graph
According to the interference relationships between links, we can construct the link conflict graph, where each node represents one link and each edge represents a conflict between the two corresponding links. For example, a five-link chain topology and its link conflict graph are shown in Fig. 10–2. Links 1 and 2 conflict with each other because node B cannot transmit and receive at the same time. Links 1 and 3 conflict with each other because node C's transmission will introduce enough interference to corrupt the reception at node B. Links 1 and 4 do not conflict with each other if node D's transmission does not interfere with the reception at node B.
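The conflict pattern of Fig. 10–2 can be generated programmatically. This minimal sketch assumes, as in the figure, that link $i$ conflicts with link $j$ exactly when they are at most two positions apart along the chain; real conflict graphs depend on the physical layer model, as discussed next.

```python
def chain_conflict_graph(n_links, conflict_span=2):
    """Edges of the link conflict graph for an n-link chain (cf. Fig. 10-2),
    assuming link i conflicts with link j iff |i - j| <= conflict_span:
    |i-j| = 1 models a shared node (cannot transmit and receive at once),
    |i-j| = 2 models interference (e.g., C's transmission corrupting B)."""
    edges = set()
    for i in range(1, n_links + 1):
        for j in range(i + 1, n_links + 1):
            if j - i <= conflict_span:
                edges.add((i, j))
    return edges

edges = chain_conflict_graph(5)
# Link 1 conflicts with links 2 and 3 but not with link 4, matching Fig. 10-2
```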
The link conflict graph can be constructed on different physical layer models. In the protocol model, any other transmitter has to be at least a certain distance away from an ongoing receiver. In the carrier sensing model, any other transmitter has to be at least a certain distance away from an ongoing transmitter. In the physical model, the aggregate power from all other ongoing transmissions plus the noise power must be less than a certain threshold so that the SNR requirement at an ongoing receiver is satisfied. In the bi-directional transmission model, such as 802.11, where the two-way handshake DATA/ACK or the four-way handshake RTS/CTS/DATA/ACK is used for each transmission, both the transmitter and the receiver of a link have to satisfy the requirements of one or more of the above models. Some mixed models can also be adopted, such as a model considering the requirements of both the carrier sensing model and the physical model.
In this chapter, we call a model a distance model if it considers the distance between the considered link and one other link at a time, as in the carrier sensing model. A model is called an interference model if it considers the impact of the interference power level from other links, as in the physical model. A mixed model considers the requirements of both of the above models. All these models can be characterized by a weighted conflict graph. A weight $w_{ij}$ describes the impact of link $i$ on link $j$, and

$$w_{ij} = \begin{cases} \dfrac{P_{rj}(i)}{\frac{P_{rj}(j)}{SNR_j} - P_N}, & \text{(interference model)} \\[2ex] b \ (0 \text{ or } 1), & \text{(distance model)} \\[1ex] \max\left\{\dfrac{P_{rj}(i)}{\frac{P_{rj}(j)}{SNR_j} - P_N},\ b\right\}, & \text{(mixed model)} \end{cases} \qquad (10.4)$$

where $P_{rj}(i)$ and $P_{rj}(j)$ are the received powers at link $j$ from the transmissions over links $i$ and $j$, respectively, $P_N$ is the noise power, $SNR_j$ is the required SNR for a successful transmission at link $j$, and $\frac{P_{rj}(j)}{SNR_j} - P_N$ is the maximum allowable interference at link $j$.
If $\sum_{i \in S, i \neq j} w_{ij} < 1$, the transmission at link $j$ will be successful even if all links belonging to the set $S$ are simultaneously transmitting. If this condition is true for all $j \in S$, the transmissions at all the links in $S$ can be scheduled successfully at the same time. Such a set is called an independent set. If adding any one more link to an independent set $S$ results in a non-independent set, $S$ is called a maximum independent set. For a set of links, if no two links in the set can be scheduled to transmit successfully at the same time, we refer to the set as an interference clique. If the set is no longer a clique after adding any one more link, it is also referred to as a maximum interference clique.
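The independence condition $\sum_{i \in S, i \neq j} w_{ij} < 1$ lends itself to brute-force enumeration, which is acceptable here because paths contain few links. The weight matrix below is a toy distance-model example (weights in {0, 1}) for illustration, not data from the dissertation.

```python
from itertools import combinations

def is_independent(links, w):
    """S is independent iff sum_{i in S, i != j} w[i][j] < 1 for every j in S."""
    return all(sum(w[i][j] for i in links if i != j) < 1 for j in links)

def maximal_independent_sets(all_links, w):
    """Brute-force enumeration over all subsets, keeping only maximal sets."""
    indep = [frozenset(s) for r in range(1, len(all_links) + 1)
             for s in combinations(all_links, r) if is_independent(s, w)]
    return [s for s in indep if not any(s < t for t in indep)]

# Toy 3-link path under the distance model: links 1-2 and 2-3 conflict (w = 1),
# links 1 and 3 do not (w = 0)
w = {1: {1: 0, 2: 1, 3: 0}, 2: {1: 1, 2: 0, 3: 1}, 3: {1: 0, 2: 1, 3: 0}}
msets = maximal_independent_sets([1, 2, 3], w)
# The maximal independent sets are {2} and {1, 3}
```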
10.3.2 Upper Bound of Path Capacity in Single Interference Model
In the single interference model, any two links $L_i$ and $L_j$ conflict with each other if the weight $w_{ij}$ defined in Equation (10.4) is larger than or equal to 1, and do not conflict otherwise; the conflict relationship is independent of any other links.
In this subsection, we assume that the link rate is determined by the received power and is equal to the maximum available rate satisfying the receiver sensitivity requirement. We will discuss in Section 10.3.4 a more general case where the link rate is determined by both the receiver sensitivity and the surrounding interference.

Let $i$ be the index of the available channel rates and $P_{se}(i)$ be the receiver sensitivity for the $i$th channel rate $r_i$. The index $i$ increases with the channel rate: if $j > i$, then $r_j > r_i$ and $P_{se}(j) > P_{se}(i)$. The link rate $r_c$ is then determined by the received power $P_r$ at the receiver of the link:

$$r_c = r_i \quad \text{if } P_{se}(i+1) > P_r \geq P_{se}(i) \qquad (10.5)$$

Given $r_c$ for each link, $w_{ij}$ can be calculated for any two links, and the link conflict graph can be constructed accordingly for a given topology.
Now let us define a new metric, the interference clique transmission time $T_C$, for a clique $C$ in the link conflict graph:

$$T_C = \sum_{l \in C} T_l \qquad (10.6)$$

where $T_l$ is the transmission time for a packet over link $l$. For a given path $P$, find the set $S$ of all the maximum interference cliques $C$ of the links belonging to $P$. Let $T^*_P$ be the maximum value of $T_C$ over all cliques of $P$:

$$T^*_P = \max_{C \in S} T_C \qquad (10.7)$$
Notice that finding all the maximum cliques of a graph is an NP-hard problem. However, the number of links of a path in wireless networks is normally limited to a very small number, so a brute-force algorithm can find them in a reasonable time.

Given $T^*_P$, the path capacity $C_P$ is upper bounded by

$$C_P \leq \frac{L_p}{T^*_P} \qquad (10.8)$$
where $L_p$ is the packet length. This can be easily shown by the following fact. $T^*_P$ is the interference clique transmission time of some clique $C$ of $P$. Consider a link $l$ in $C$ and any packet successfully delivered from the source to the destination. The packet takes time $T^*_P$ to travel through all the links in $C$, and link $l$ cannot schedule any other transmission during the period $T^*_P$. That means the packet occupies link $l$ for at least time $T^*_P$, and the throughput at link $l$ is at most $L_p / T^*_P$. Because the end-to-end throughput cannot be larger than the throughput of any link of the path, the path capacity satisfies $C_P \leq L_p / T^*_P$.
It can be shown that if there is an odd cycle [38] in the link conflict graph, e.g., in Fig. 10–3, the equality in Equation (10.8) does not hold. Suppose the transmission times of one packet over all links are the same and equal to $T$. It can be easily shown that $C_P = \frac{2L_p}{7T} < \frac{L_p}{3T}$, where $L_p$ is the packet length and $3T$ is $T^*_P$, the maximum value of the interference clique transmission time over all cliques of the path.
However, a large portion of the paths found by routing algorithms that minimize or maximize some metric, like the shortest-hop routing algorithm, have no odd cycles. Most of these paths have a unique feature: if two links of a path conflict with each other, all the links between them along the path conflict with both of them. We call these paths direct routes, and other paths detour routes. For direct routes, the problem
Figure 10–3: A path with an odd cycle in the link conflict graph (nodes A–H, links 1–7).
of finding all the maximum cliques can be simplified. To find all the maximum cliques including a given link, we only need to consider the other links close to it along the path. We refer to these cliques as the local interference cliques of a path. For direct routes, the maximum value of the interference clique transmission time over all local cliques, denoted $\bar{T}^*_P$, is equal to that over all cliques, $T^*_P$. Polynomial algorithms can be designed to find all local cliques, which are omitted in this chapter due to limited space.
For direct routes, $C_P = L_p/T^*_P = L_p/\bar{T}^*_P$, and the following simple scheduling can achieve the path capacity:

• The first link, i.e., the source node, schedules a transmission once every $T^*_P$.

• Each link starts a transmission at the moment the upstream link finishes a transmission.

It can be easily shown that no conflicting links are scheduled to transmit at the same time, so this is a feasible schedule.
In this subsection, we have defined a new metric, the interference clique transmission time, and shown that it closely represents the path capacity. We will show later that both metrics $T^*_P$ and $\bar{T}^*_P$, i.e., the maximum value of the interference clique transmission time over all cliques and over all local cliques, respectively, can be used as routing metrics to find paths with high throughput, and that $\bar{T}^*_P$ can be more easily computed than $T^*_P$. Apparently,

$$C_P \leq \frac{L_p}{T^*_P} \leq \frac{L_p}{\bar{T}^*_P} \qquad (10.9)$$
10.3.3 Exact Path Capacity in Single Interference Model
Let the link conflict graph be constructed in the same way as in the above subsection. Then we can find all the independent sets $\{E_1, E_2, E_3, ..., E_\alpha, ..., E_M\}$, with $E_\alpha \subseteq P$ for all $1 \leq \alpha \leq M$, where $P$ is the set of all links in the considered path $P$ and $M$ is the number of independent sets of $P$. Although it is an NP-hard problem to find all independent sets, a brute-force algorithm can finish in a reasonable time because the number of links of a path in wireless networks is not large.
At any time, at most one independent set will be chosen and scheduled to transmit packets over all links in that set. Let $\lambda_\alpha \geq 0$ denote the time share scheduled to the independent set $E_\alpha$, with

$$\sum_{1 \leq \alpha \leq M} \lambda_\alpha \leq 1, \quad \lambda_\alpha \geq 0 \ (1 \leq \alpha \leq M) \qquad (10.10)$$

Let $R_\alpha = \{r_e, \text{for all } e \in P\}$ be a row vector of size $|P|$, where $r_e = 0$ if $e \notin E_\alpha$; otherwise, $r_e$ is the effective data rate over link $e$, defined in Equation (10.2).
Therefore, $\lambda_\alpha R_\alpha$ is a flow vector that the network can support in the time share $\lambda_\alpha$ for the independent set $E_\alpha$. We define a schedule $S$ as a frequency vector $S = (\lambda_\alpha : 1 \leq \alpha \leq M)$. A given demand vector $\vec{f} = \{f_e, \text{for all } e \in P\} \in \mathbb{R}^{|P|}$ is feasible if there exists a schedule $S$ satisfying

$$\vec{f} = \sum_{1 \leq \alpha \leq M} \lambda_\alpha R_\alpha \qquad (10.11)$$

The path capacity is the maximum end-to-end throughput, which only counts the traffic traveling through all links from the source to the destination, so

$$C_P = \max \min_{e \in P} f_e \qquad (10.12)$$
Now we can formulate the path capacity problem as follows.

$$\begin{aligned} \text{Maximize} \quad & \min_{e \in P} f_e \\ \text{Subject to:} \quad & \textstyle\sum_{1 \leq \alpha \leq M} \lambda_\alpha \leq 1 \\ & \textstyle\sum_{1 \leq \alpha \leq M} \lambda_\alpha R_\alpha - \vec{f} = 0 \\ & \lambda_\alpha \geq 0 \ (1 \leq \alpha \leq M) \end{aligned} \qquad (10.13)$$
It can be easily shown that the set of all feasible demand vectors is a convex set, and that given a feasible demand vector $\vec{f} = \{f_e, \text{for all } e \in P\}$, the new vector $\vec{f}^* = \min_{e \in P} f_e \cdot (1, 1, ..., 1) = \min_{e \in P} f_e \cdot I$ is also feasible, where $I$ is the all-one vector in $\mathbb{R}^{|P|}$. Thus Problem (10.13) can be converted to a linear programming problem:

$$\begin{aligned} \text{Maximize} \quad & f_e \\ \text{Subject to:} \quad & \textstyle\sum_{1 \leq \alpha \leq M} \lambda_\alpha \leq 1 \\ & \textstyle\sum_{1 \leq \alpha \leq M} \lambda_\alpha R_\alpha - f_e I = 0 \\ & \lambda_\alpha \geq 0 \ (1 \leq \alpha \leq M), \quad f_e \geq 0 \end{aligned} \qquad (10.14)$$
Now we can interpret the schedule $S$ as the following link scheduling for a given path. The time axis is divided into slots of duration $\tau$. Each time slot is partitioned into a set of subslots indexed by $\alpha$ ($1 \leq \alpha \leq M$), such that the $\alpha$th subslot has a length of $\lambda_\alpha \tau$ seconds. In the $\alpha$th subslot, all links in the set $E_\alpha$ are scheduled to transmit. Thus, during each time slot of length $\tau$, the throughput $f_e$ over link $e$ is

$$f_e = \frac{1}{\tau} \sum_\alpha \lambda_\alpha \tau R_\alpha(e) = \sum_\alpha \lambda_\alpha R_\alpha(e) \qquad (10.15)$$

Since in the solution of Problem (10.14) $f_e$ is the same for all links, the path capacity is equal to $\min_{e \in P} f_e = f_e$.
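The time-share interpretation of Eq. (10.15) can be checked numerically for the five-link chain of Fig. 10–2 (links conflict iff at most two apart). The unit link rate below is an assumption for illustration; with equal rates, the schedule shown attains the direct-route capacity $L_p/(3T)$ from Section 10.3.2.

```python
def per_link_throughput(schedule, rates, links):
    """Throughput f_e of each link under a time-share schedule (Eq. 10.15):
    f_e = sum over alpha of lambda_alpha * R_alpha(e)."""
    return {e: sum(lam * (rates[e] if e in s else 0.0) for s, lam in schedule)
            for e in links}

# Independent sets of the five-link chain: {1,4}, {2,5}, {3}; give each a
# 1/3 time share (lambda values sum to 1, satisfying constraint 10.10)
rate = 1.0  # packets per unit time on every link, i.e., L_p / T (assumed)
schedule = [({1, 4}, 1/3), ({2, 5}, 1/3), ({3}, 1/3)]
f = per_link_throughput(schedule, {l: rate for l in range(1, 6)}, range(1, 6))
# Every link carries rate/3: the min over links equals L_p/(3T), the capacity
```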
10.3.4 Path Capacity in Multi-Interference Model with Variable Link Rate
In the above two subsections, we considered interference only pairwise, and the link rate was determined by the receiver sensitivity. In this subsection, we consider the aggregate effect of all existing interference on transmissions; the link rate is determined not only by the receiver sensitivity but also by the interference level contributed by all surrounding transmissions.

In the multi-interference model, the link conflict graph is a weighted graph, with the weight $w_{ij}$ between links $i$ and $j$ defined in Equation (10.4). The independent sets will be significantly different from those obtained in the single-interference model, and the highest achievable link rate of each link may also differ when the link is in different independent sets, due to different interference levels.
Given a set of links $E_\alpha$, the interference level at each link is determined since we assume each user uses a predefined transmission power. When all links in $E_\alpha$ are scheduled to transmit at the same time, the SINR at link $L_i$ in $E_\alpha$ is

$$SINR_i^\alpha = \begin{cases} \dfrac{P_{rii}}{P_N + \sum_{\{j: L_j \in E_\alpha \setminus \{L_i\}\}} P_{rji}}, & \text{(multi-interference)} \\[2ex] \min_j \dfrac{P_{rii}}{P_N + P_{rji}}, & \text{(single-interference)} \end{cases} \qquad (10.16)$$

where $P_{rii}$ is the received power level of the intended signal at link $L_i$, and $P_{rji}$ for all $L_j \in E_\alpha \setminus \{L_i\}$ is the received interference power at link $L_i$ from the transmission at link $L_j$. If two different links $L_i$ and $L_j$ have a common node, we set $P_{rji} = P_{rij} = \infty$ because one node cannot transmit and receive at the same time. Notice that if bidirectional transmission is adopted, $P_{rij}$ can be the interference level of either the DATA transmission or the ACK transmission, and we also need to check whether the SNR requirement for receiving both DATA and ACK frames is satisfied at link $i$.
If there is a link whose SNR is less than the requirement of the lowest link rate, then
transmission over that link cannot be scheduled at the same time with other links inEα,
andEα is not an independent set. Otherwise,Eα is an independent set. For an independent
243
setEα, link rate of each link inEα will be selected as the highest possible channel rate
satisfying both requirements of receiver sensitivity and SNR.
According to the above description of independent sets, we can use a brute-force
algorithm to find all independent sets and determine the link rates of the links in them for one
path. Then the same method as in the previous section can be used to derive the path capacity
of any given path.
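Such a brute-force check can be sketched as follows; the SINR test stands in for Equation (10.16), and the power values used are made up purely for illustration:

```python
from itertools import combinations

def independent_sets(signal, interf, noise, sinr_min):
    """Brute-force enumeration of independent sets.

    signal: dict link -> received power Pr_ii of the intended transmission;
    interf: dict (li, lj) -> power that lj's transmission deposits at li's
    receiver (Pr_ji in Eq. 10.16); noise: P_N; sinr_min: SINR required for
    the lowest link rate. A subset is independent iff every link in it still
    meets sinr_min when all of its links transmit together.
    """
    def ok(subset):
        return all(
            signal[li] / (noise + sum(interf[(li, lj)]
                                      for lj in subset if lj != li)) >= sinr_min
            for li in subset)

    names = sorted(signal)
    return [set(s) for r in range(1, len(names) + 1)
            for s in combinations(names, r) if ok(s)]

# Toy values (illustrative only): links A and B barely interfere with each
# other, while link C interferes strongly with both.
interf = {('A', 'B'): 0.1, ('B', 'A'): 0.1, ('A', 'C'): 5.0,
          ('C', 'A'): 5.0, ('B', 'C'): 5.0, ('C', 'B'): 5.0}
sets = independent_sets({'A': 10.0, 'B': 10.0, 'C': 10.0},
                        interf, noise=1.0, sinr_min=5.0)
# {'A', 'B'} is independent; any set pairing C with another link is not.
```

The enumeration is exponential in the number of links, which is why it is only practical for the links of a single path, as the text notes.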
10.3.5 Extension to Multiple Paths between a Source and Its Destination or between Multiple Pairs of Sources and Destinations
Given $K$ paths $P_1, P_2, \ldots, P_K$ between the source node $S$ and the destination node $D$,
let $f_k$ denote the path throughput of the $k$th path.
Let $P = \bigcup_i P_i$. Find all independent sets $E_\alpha$ ($1 \le \alpha \le M$) of $P$ and
calculate $R_\alpha$ for each $E_\alpha$. Let $I(P_k)$ be a row indicator vector in
$\mathbb{R}^{|P|}$, with
\[
I_e(P_k) =
\begin{cases}
1, & \text{if } e \in P_k\\
0, & \text{if } e \in P \setminus P_k
\end{cases}
\tag{10.17}
\]
Then the problem of finding the maximum aggregate throughput over all the paths can be
formulated as
\[
\begin{aligned}
\text{Maximize } & \sum_{1 \le k \le K} f_k\\
\text{Subject to: } & \sum_{1 \le \alpha \le M} \lambda_\alpha \le 1\\
& \sum_{1 \le \alpha \le M} \lambda_\alpha R_\alpha - \sum_{k} f_k I(P_k) = 0\\
& \lambda_\alpha \ge 0 \ (1 \le \alpha \le M), \quad f_k \ge 0 \ (1 \le k \le K)
\end{aligned}
\tag{10.18}
\]
If the $K$ paths $P_1, P_2, \ldots, P_K$ belong to $K$ pairs of sources and destinations, the
problem formulation is the same when we want to maximize the aggregate throughput of all
source-destination pairs. If fairness is considered, other objective functions (e.g., concave
utility functions) can be used [101].
10.3.6 Considering the Packet Error Rate over Each Link in the Link Scheduling Algorithm
If we know the packet error rate $p_{e_i}$ over each link $L_i$, then to find the path capacity we
only need to modify the link rate vector $R_\alpha$ in the above problem formulation:
\[
R'_\alpha = R_\alpha \,\mathrm{Diag}\{(1-p_{e_1}), (1-p_{e_2}), \ldots, (1-p_{e_{|P|}})\}
\tag{10.19}
\]
where $\mathrm{Diag}\{(1-p_{e_1}), (1-p_{e_2}), \ldots, (1-p_{e_{|P|}})\}$ is a diagonal matrix
with $(1-p_{e_i})$ ($1 \le i \le |P|$) on the diagonal.
The interference clique transmission time $T_C$ becomes the expected interference clique
transmission time $T'_C$, with
\[
T'_C = \sum_{l \in C} \frac{T_l}{1 - p_{e_l}}
\tag{10.20}
\]
$T^*_P$ and $T_P^*$ defined in Section 10.3.2 should also be recalculated accordingly.
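As a concrete illustration of Equation (10.20), the expected clique transmission time can be computed directly from the per-link transmission times and packet error rates (the link values below are made up for illustration):

```python
def expected_clique_time(links):
    """Expected interference clique transmission time T'_C (Eq. 10.20).

    links: list of (T_l, p_el) pairs, where T_l is the time to send one
    packet over link l and p_el is the packet error rate of link l. A lost
    packet must be retransmitted, so the expected number of transmissions
    per packet over link l is 1/(1 - p_el).
    """
    return sum(t / (1.0 - p) for t, p in links)

# Hypothetical 3-link clique: 1 ms per packet, varying loss rates.
clique = [(1.0e-3, 0.0), (1.0e-3, 0.1), (1.0e-3, 0.5)]
t_c = expected_clique_time(clique)  # = (1 + 1/0.9 + 1/0.5) ms
```

A lossier link thus inflates the clique time it belongs to, which is exactly how packet error rates enter the expected-CTT routing metric discussed later.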
10.4 Path Selection in Wireless Ad Hoc Networks
In this section, we study how to select a good path with high bandwidth using various
routing metrics. First, we formulate a linear/integer programming optimization problem to
find the best possible path, i.e., the one achieving the maximum end-to-end throughput or
path capacity. Though it is a centralized algorithm, it provides a supremum of the capacity
of all paths found by any distributed routing algorithm, and makes it possible to evaluate
how close the path capacity found by different routing metrics is to the maximum. Then we
propose several heuristic routing algorithms that use various routing metrics to find a good
path, including the expected interference clique transmission time. The new routing metric
takes into account both the multirate capability and interference, which previously proposed
routing metrics either ignore or consider only partially, and hence it may obtain a significant
performance gain.
10.4.1 Optimal Path Selection
Maximization of the end-to-end throughput between a source and a destination is a
max-flow problem, which can be formulated as the following linear program:
\[
\begin{aligned}
\text{Maximize } & v\\
\text{Subject to: } & \sum_{\{j:(i,j)\in E\}} x_{ij} - \sum_{\{j:(j,i)\in E\}} x_{ji} =
\begin{cases}
v, & i = s\\
0, & i \in N\setminus\{s,t\}\\
-v, & i = t
\end{cases}\\
& x_{ij} \ge 0, \quad (i,j)\in E\\
& \sum_{1\le\alpha\le M} \lambda_\alpha R_\alpha - \vec{f} = 0\\
& \sum_{1\le\alpha\le M} \lambda_\alpha \le 1, \quad \lambda_\alpha \ge 0
\end{aligned}
\tag{10.21}
\]
where $x_{ij}$ is the flow from node $i$ to node $j$ over link $L_{ij}$, and $\vec{f}$ is the
flow demand vector with $\vec{f} = \{x_{ij} + x_{ji},\ (i,j)\in E,\ i<j\}$. The first two rows
are the standard formulation of a max-flow problem. The last two rows are the feasibility
conditions of the flow vector that account for wireless interference as well as the multirate
capability; they replace the original condition that the flow over each link be no more than
the link capacity. Normally, the solution of this problem utilizes multiple paths between the
source and the destination.
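Without the last two (interference) rows, (10.21) is a standard max-flow problem; a minimal Edmonds-Karp sketch (toy capacities, our own function name) is:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow. cap: dict mapping (u, v) -> edge capacity."""
    # Build residual capacities, registering a zero-capacity reverse edge
    # for every forward edge, and an undirected adjacency for BFS.
    res, adj = {}, {}
    for (u, v), c in cap.items():
        res[(u, v)] = res.get((u, v), 0) + c
        res.setdefault((v, u), 0)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                      # no augmenting path left
        # Recover the path, find its bottleneck, and augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[e] for e in path)
        for u, v in path:
            res[(u, v)] -= aug
            res[(v, u)] += aug
        flow += aug
```

For instance, with capacities `{('s','a'): 3, ('s','b'): 2, ('a','b'): 1, ('a','t'): 2, ('b','t'): 3}`, the computed maximum flow from `s` to `t` is 5. The interference constraints of (10.21) couple the link capacities together, which is why the dissertation resorts to a general LP rather than a combinatorial max-flow routine.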
In this chapter, we focus on unicast, single-path routing. Therefore, we modify the above
problem into a single-path problem as follows:
\[
\begin{aligned}
\text{Maximize } & v\\
\text{Subject to: } & \sum_{\{j:(i,j)\in E\}} x_{ij} - \sum_{\{j:(j,i)\in E\}} x_{ji} =
\begin{cases}
v, & i = s\\
0, & i \in N\setminus\{s,t\}\\
-v, & i = t
\end{cases}\\
& 0 \le x_{ij} \le Cap_{ij}\cdot z_{ij}, \quad (i,j)\in E\\
& \sum_{\{j:(i,j)\in E\}} z_{ij} \le 1, \quad z_{ij} \in \{0,1\}\\
& \sum_{1\le\alpha\le M} \lambda_\alpha R_\alpha - \vec{f} = 0\\
& \sum_{1\le\alpha\le M} \lambda_\alpha \le 1, \quad \lambda_\alpha \ge 0
\end{aligned}
\tag{10.22}
\]
where $Cap_{ij}$ is the maximum achievable link rate over link $L_{ij}$. The first three rows
specify that there is only one path between the source and the destination: the links along that
path carry the same flow, and all other links carry zero flow. This is a mixed integer-linear
program.
10.4.2 Using Routing Metrics in Path Selection
Many routing metrics have been proposed for ad hoc networks, as discussed in Section I,
including hop count, end-to-end transmission delay (or medium time), link rate, and the
bandwidth-distance product (BDiP). We also propose a new routing metric, the interference
clique transmission time (CTT), in Section 10.3.2. To reduce the computation time, the local
interference clique transmission time (LCTT) can be used.
If the packet loss rate is considered, these become the expected transmission count (ETX),
expected end-to-end transmission delay, expected link rate, expected BDiP, expected CTT,
and expected LCTT. To use these routing metrics, we should find paths that minimize ETX,
expected end-to-end transmission delay, expected CTT, or expected LCTT, or that maximize the
expected link rate or expected BDiP. Hereafter, we refer to the routing algorithms using
them as min-hop, min-delay, max-rate, max-BDiP, min-CTT and min-LCTT, respectively.
Specifically, the min-hop routing algorithm finds the path with the smallest hop count or
ETX; the min-delay routing algorithm finds the path with the shortest (expected) end-to-end
transmission delay. The max-rate and max-BDiP routing algorithms find the path with the
widest bottleneck link, where the bottleneck link of a path is the link with the lowest
(expected) link rate or the smallest (expected) BDiP among all the links of that path.
The min-CTT and min-LCTT routing algorithms find the path with the smallest bottleneck clique
value, where the bottleneck clique of a path is the clique with the largest (expected) CTT or
LCTT among all cliques or local cliques of that path.
Among these routing metrics, hop count and end-to-end transmission delay are additive
end-to-end routing metrics, so the Bellman-Ford algorithm can be used to minimize them.
The other routing metrics can be used with a widest-path routing algorithm; the Bellman-Ford
algorithm can also be used for this purpose, because it is well suited to computing, for a
given number of hops, the path with the maximum bandwidth or the largest/smallest value of
another metric [5].
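A widest-path variant of Bellman-Ford of the kind referred to above can be sketched as follows (the link rates are illustrative values only):

```python
def widest_path(n, links, s):
    """Bellman-Ford variant: maximum bottleneck bandwidth from source s.

    n: number of nodes (0..n-1); links: list of (u, v, rate) for
    bidirectional links. Returns best[v], the widest bottleneck rate
    achievable on any s->v path. Swapping the min/max pair adapts the
    relaxation to other bottleneck-style metrics.
    """
    edges = ([(u, v, r) for u, v, r in links] +
             [(v, u, r) for u, v, r in links])
    best = [0.0] * n
    best[s] = float('inf')
    for _ in range(n - 1):                  # a simple path has <= n-1 hops
        updated = False
        for u, v, r in edges:
            cand = min(best[u], r)          # bottleneck of the path via u
            if cand > best[v]:
                best[v] = cand
                updated = True
        if not updated:
            break
    return best

# 0-1 at 11 Mbps, 1-2 at 54 Mbps, 0-2 direct at 6 Mbps:
# the widest 0->2 path goes through node 1, with bottleneck 11 Mbps.
rates = widest_path(3, [(0, 1, 11.0), (1, 2, 54.0), (0, 2, 6.0)], 0)
```

The same relaxation loop, run hop count by hop count, is what makes Bellman-Ford convenient for bounding the metric value per number of hops, as noted in the text.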
These routing metrics can also be used in distributed routing algorithms, such as AODV
and DSR. When a node overhears a repeated route request message, it forwards or rebroadcasts
the request only if the recalculated routing metric of the path the received request has
traveled has a better value than that of the previously received request, e.g., a smaller
hop count.
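The forward-only-if-better rule can be sketched as a small filter each node applies to incoming route requests (the message fields here are hypothetical, not the actual AODV/DSR formats):

```python
class RequestFilter:
    """Decide whether to rebroadcast a route request.

    Smaller metric = better (e.g., hop count, expected CTT). Each node
    remembers the best metric seen for every (source, request_id) pair.
    """

    def __init__(self):
        self.best = {}   # (source, request_id) -> best metric seen so far

    def should_forward(self, source, request_id, metric):
        key = (source, request_id)
        if key not in self.best or metric < self.best[key]:
            self.best[key] = metric
            return True   # strictly better path than any earlier copy
        return False      # duplicate with no improvement: drop it

f = RequestFilter()
f.should_forward('S', 1, 4)   # first copy (4 hops): forward
f.should_forward('S', 1, 6)   # worse copy: drop
f.should_forward('S', 1, 3)   # better copy: forward again
```

Metrics to be maximized (rate, BDiP) fit the same filter with the comparison reversed.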
10.5 Performance Evaluation
In this section, we use Matlab to evaluate the performance of various routing metrics
in finding good paths in terms of path capacity, and investigate which metric finds paths
with larger path capacity and how close the capacity of the found paths is to the optimum
value.
10.5.1 Simulation Setup
In the simulations, there are N nodes randomly distributed in the network. The channel
rates 54, 18, 11, 6 and 1 Mbps are studied, with transmission radii of 76, 183, 304,
396 and 610 m [32], respectively. As discussed in [155], 802.11 systems have very
close interference ranges and optimum carrier sensing ranges for different channel rates,
so for simplicity we use a single interference range of 900 m for all channel rates. That is
to say, as long as two nodes are at least 900 m apart, a transmission from one
node does not interfere with reception at the other. The data packet size is 1000 bytes.
The IEEE 802.11b/g protocol parameters are adopted to calculate the effective data rate at
each link; 1 and 11 Mbps are 802.11b rates and 6, 18 and 54 Mbps are 802.11g rates. The two-way
DATA/ACK handshake is used, and both DATA and ACK frames are transmitted at the same
link rate.
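The rate-versus-distance relation implied by these radii can be sketched as a simple lookup (the radii are the values quoted above from [32]; the function name is ours):

```python
# (rate in Mbps, transmission radius in m), fastest rate first.
RATE_RADII = [(54, 76), (18, 183), (11, 304), (6, 396), (1, 610)]

def best_rate(distance):
    """Highest channel rate whose transmission radius covers the distance.

    Returns 0 if the two nodes are out of range even at 1 Mbps, i.e.,
    no link exists between them.
    """
    for rate, radius in RATE_RADII:
        if distance <= radius:
            return rate
    return 0

best_rate(100)   # 100 m > 76 m rules out 54 Mbps; 18 Mbps reaches 183 m
best_rate(700)   # beyond 610 m: no link at any rate
```

This lookup also explains the single-rate observation made later in Section 10.5.4: restricting the network to 54 Mbps removes every link longer than 76 m and can partition the topology.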
We fix the node nearest to the upper left corner as the source and find paths from it to
all other nodes; thus a total of N − 1 source-destination pairs (paths) are considered in the
evaluation. We compare seven routing algorithms: optimum, min-hop, min-delay, max-rate,
max-BDiP, min-CTT and min-LCTT. Here, optimum refers to the mixed integer-linear problem
(10.22), which finds the path with the largest path capacity. The performance metric is path
capacity. Paths are computed with these routing algorithms, and the capacity of each path is
computed by solving the linear program defined in Equation (10.14).
10.5.2 Comparison with Optimal Routing
The optimal routing algorithm is formulated as the mixed integer-linear problem in
Equation (10.22). In general this is an NP-hard problem, so we can only solve it for a small
topology in a reasonable time. In this set of simulations, 25 nodes are randomly distributed
in a 200 m x 2500 m topology.
Figure 10–4 shows the path capacity of the paths found by the different routing algorithms.
We can observe that the min-CTT and min-LCTT routing algorithms always find a path with
capacity equal to the optimal value in this topology. The min-delay routing algorithm finds
the path with optimum capacity when the source-destination distance is not large. However,
it fails to do so when the source-destination distance is large, although
Figure 10–4: Path capacity (Mbps) for different routing algorithms vs. node ID (in order of distance from the source); curves: min-hop, min-delay, max-rate, max-BDiP, min-LCTT, min-CTT, optimal.
it finds a value close to the optimum. The max-rate and max-BDiP algorithms may fail to
find a path with the optimum capacity whether the source-destination distance is large
or small. In addition, the min-hop routing algorithm performs much worse at finding a
high-throughput path than all the other routing algorithms, because it does not consider
the multirate capability of the wireless nodes.
10.5.3 Performance Evaluation of Six Routing Metrics in a Larger Topology
In this set of simulations, 400 nodes are randomly distributed in a 1500 m x 300 m
topology. For better visual clarity, we only show the results for 26 random source-destination
pairs; all other pairs yield similar results.
Fig. 10–5 shows the path capacity of the paths selected by the different routing algorithms.
First, the min-hop routing algorithm performs much worse than all the other algorithms.
Second, min-CTT always finds the path with the largest capacity among the paths found
by all the algorithms. Third, min-LCTT has almost the same performance as min-CTT for
all source-destination pairs. Fourth, the min-delay routing algorithm finds a path with
capacity equal to that found by min-CTT only when the source-destination distance is
less than 2000 meters; otherwise, the path capacity is 10% less than that found by min-CTT or
min-LCTT. Furthermore, the max-rate and max-BDiP routing algorithms can find
Figure 10–5: Path capacity (Mbps) for different routing algorithms vs. node ID (in order of distance from the source); curves: min-hop, min-delay, max-rate, max-BDiP, min-LCTT, min-CTT.
paths with capacity several times that found by the min-hop algorithm, but up to 60% less
than that found by the min-CTT and min-LCTT routing algorithms.
Fig. 10–6 shows the hop counts of the paths found by these routing algorithms. As expected,
the min-hop routing algorithm finds the path with the smallest hop count. The max-rate and
max-BDiP routing algorithms often find paths with a very large hop count. The min-delay,
min-CTT, and min-LCTT routing algorithms find paths with similar hop counts.
Fig. 10–7 shows the source-destination distance for all the source-destination pairs,
ranging from about 0 m to 3000 m. It is meaningful when considered together with the other
figures. For example, when the source-destination distance is larger than 2000 m, the min-hop
routing algorithm finds paths with 4 or more hops; min-delay, min-CTT and min-LCTT find paths
with 7 or more hops; and min-CTT and min-LCTT find paths with capacity significantly larger
than that found by the other routing algorithms.
Fig. 10–8 shows the time to solve the path capacity problem defined in Equation (10.14)
for all paths found by these routing algorithms. Since this problem requires the
Figure 10–6: Path lengths (hop count) for different routing algorithms vs. node ID (in order of distance from the source); curves: min-hop, min-delay, max-rate, max-BDiP, min-LCTT, min-CTT.
Figure 10–7: Source-destination distance (m) vs. node ID (in order of distance from the source).
Figure 10–8: Path capacity solving time
information of all the independent sets, the solving time also includes the time to find all
the independent sets for all the links of the considered path. Each point shows the solving
time for one path. We observe that the solving time increases almost linearly with the hop
count of the path, which illustrates that the path capacity problem can be solved in a short
time when the hop count is less than 22.
Table 10–2 shows the path finding time and the path capacity solving time for all the
routing algorithms. The values in the table are aggregate values over all 399 paths. We
observe that min-CTT has a much larger path finding time, because there is no
polynomial-time algorithm to calculate CTT. The other routing algorithms have reasonable
path finding times. The path capacity solving time is approximately linear in the hop count,
which is shown in Fig. 10–6.
10.5.4 Path Capacity of a Single-Rate Network
In this subsection, we illustrate that, with an appropriate routing metric, better
end-to-end throughput can be achieved by allowing multiple rates at each node, which may
Table 10–2: Run time of different routing algorithms

Algorithm     Path finding time (s)   Path capacity solving time (s)
min-hop              1.9840                   85.7190
min-delay           10.8280                  140.1250
max-rate             4.2030                  275.6880
max-BDiP            12.0160                  201.6710
min-maxLCTT         24.8750                  155.2660
min-maxCTT         289.3130                  164.8590
not be the case when hop count is used as the routing metric [84]. The topology is the same
as in the previous subsection. The min-CTT routing algorithm is used because it always finds
a path with higher throughput. Only a single link rate (1, 6, 11, 18, or 54 Mbps) is allowed
in each single-rate scenario. We compare the results from the single-rate scenarios with the
scenario where all five link rates are allowed. Notice that in the single-rate scenarios, a
scenario with a lower link rate has more links in the network, because a lower link rate has
a larger transmission range.
Fig. 10–9 shows the path capacity found for all these scenarios. Clearly, much larger
path capacity can be achieved in the multirate scenario than in any single-rate scenario.
Notice that if only 54 Mbps is allowed in the network, the network is partitioned into many
parts and there is often no feasible path between a source and its destination; therefore,
the path capacity is zero for the 54 Mbps scenario in this topology.
10.6 Conclusions
In this chapter, we first investigate the impact of multirate capability and interference
on path capacity, and formulate a linear program to compute the path capacity of a given
path in a multirate, multihop wireless ad hoc network. A new routing metric, the interference
clique transmission time, is proposed to find paths with higher throughput than previously
proposed routing metrics. A joint routing and MAC scheduling problem is also formulated to
address the impact of multirate capability and interference; it provides a supremum of the
path capacity achievable by any routing algorithm. The routing metrics interference clique
transmission time, hop count,
Figure 10–9: Path capacity (Mbps) for a single-rate network vs. node ID (in order of distance from the source); curves: 1, 6, 11, 18, 54 Mbps and multi-rate.
link rate, end-to-end transmission delay, and bandwidth-distance product are evaluated in
a random topology. The results demonstrate that the interference clique transmission time is
the best routing metric, finding paths with much higher capacity than the other routing
metrics. It also finds paths with capacity equal to the optimum found by the joint
optimization problem in the simulated topology.
CHAPTER 11
DISTRIBUTED FLOW CONTROL AND MEDIUM ACCESS CONTROL IN MOBILE AD HOC NETWORKS
In wireless multihop ad hoc networks, nodes need to contend for the shared wireless
channel with their neighbors. This can result in severe congestion, packet loss and long
end-to-end delay, and hence poses a great challenge to streaming, real-time, and routing
traffic as well as TCP traffic. Different from the problems in wired networks, these problems
mainly result from the close interactions between the medium access control (MAC) layer
and the higher layers, and require efficient cross-layer designs. In this chapter, we present a
framework of distributed flow control and medium access to mitigate MAC layer contention,
overcome congestion, and increase the end-to-end throughput of traffic flows across
shared-channel environments. The key idea is based on the observation that, under the
IEEE 802.11 MAC protocol, the maximum throughput of a chain topology is 1/4 of the
channel bandwidth, and the optimum packet scheduling allows simultaneous transmissions at
nodes that are four hops apart. The proposed fully distributed scheme generalizes this
optimum scheduling to any traffic flow that may encounter intra-flow contention
and inter-flow contention. Extensive simulations illustrate that the proposed scheme
controls congestion well and greatly alleviates medium collisions. It achieves much better
and more stable performance than the IEEE 802.11 MAC protocol in terms of throughput, delay,
fairness and scalability, with low and stable control overhead.
11.1 Introduction
In wireless multihop ad hoc networks, nodes have to cooperate to forward each other's
packets through the network. Due to contention for the shared channel, the throughput
of each node is limited not only by the channel capacity but also by the transmissions
in its neighborhood. Thus, each multi-hop flow encounters contention not only from other
flows that pass through the neighborhood, i.e., the inter-flow contention, but also from the
transmissions of the flow itself, because the transmission at each hop has to contend for the
channel with the upstream and downstream nodes, i.e., the intra-flow contention.
These two kinds of flow contention can result in severe collisions and congestion,
and significantly limit the performance of ad hoc networks. It has been shown in many
papers that multihop ad hoc networks perform poorly with TCP traffic as well as heavy
UDP traffic ([91, 19, 108, 140, 46, 162, 147]). The MAC protocol itself cannot solve
the congestion problem, and often aggravates congestion due to contention in the
shared channel. Fang and McDonald [42] studied how throughput and delay are
affected by path coupling, i.e., the MAC layer contention between nodes distributed
along node-disjoint paths (inter-flow contention). The results demonstrated the need for
control of cross-layer interactions and methodologies for cross-layer optimization.
To the best of our knowledge, there is no comprehensive study on, or good solution to,
congestion control that considers the MAC layer contentions and the packet scheduling
of multihop traffic flows along their selected paths in a shared channel environment. In
this chapter, we present a framework of network layer flow control and MAC layer medium
access to address the collision and congestion problems due to the intra-flow contention and
inter-flow contention. Based on the framework, a multihop packet scheduling algorithm is
incorporated into the IEEE 802.11 Distributed Coordination Function (DCF) protocol [68].
The salient feature is the generalization of the optimum packet scheduling for the chain
topology to arbitrary traffic flows in a general topology.
The framework includes multiple mechanisms: fast relay, backward-pressure congestion
control, receiver-initiated transmission scheduling, queue space limitation, and Round
Robin scheduling. The fast relay assigns high channel access priority to the downstream
nodes when they receive packets, which removes much of the intra-flow contention.
The backward-pressure congestion control gives the transmission opportunity to the congested
node while keeping its upstream nodes from transmitting. This not only reduces contention
in the congested area, but also quickly eliminates the congestion. It is also a
quick way to notify the source to slow down its sending rate, by exploiting the RTS/CTS
of the IEEE 802.11 MAC protocol. The receiver-initiated transmission scheduling scheme
uses a three-way handshake to resume a blocked flow at the upstream nodes once the
congestion has cleared. It is a timely and economical approach with even less control
overhead than the normal four-way handshake transmission in the IEEE 802.11 protocol. The
queue space limitation for each flow prevents irresponsible applications as well as
congested flows from occupying the whole queue space, leaving the resource to other,
responsible applications instead of the congested flows. The Round Robin scheduling is
adopted in the queue management to further address the unfairness problem caused by greedy
sources.
Thus, all the above mechanisms together provide a framework of distributed flow control
and medium access control designed to reduce MAC layer contention and eliminate
congestion. Our contribution is to devise these mechanisms for the shared channel
environment of multihop ad hoc networks, and to incorporate them into the IEEE 802.11
DCF protocol. Extensive simulation studies are carried out to validate their performance.
It turns out that our scheme maintains stable performance with high throughput,
independent of traffic status, and improves the aggregate throughput by up to more than
12 times, especially for multihop flows under heavy traffic load. At the same time, it
also improves the fairness among flows in terms of end-to-end throughput, and has much
shorter delay and much lower control overhead compared to the IEEE 802.11 DCF protocol.
Moreover, it scales to large networks with more multihop flows and longer paths.
The rest of this chapter is organized as follows. Section 11.2 details the impact of
MAC layer contentions on traffic flows and the resulting problems. Section 11.3 introduces
our scheme and its implementation based on the IEEE 802.11 DCF protocol. Section 11.4
evaluates the performance of our scheme through simulation. Related work is discussed
in Section 11.5. Finally, we conclude the chapter in Section 11.6.
11.2 Impact of MAC Layer Contentions on Traffic Flows
Different from wired networks, where the links are independent of each other,
wireless links share the same channel resource, and mobile nodes rely on the MAC layer to
coordinate channel access. The close interaction between the MAC layer and traffic
flows poses great challenges to congestion control as well as medium access coordination.
In this section, we characterize these interactions as intra-flow and inter-flow contention
and discuss their impact on the end-to-end performance of traffic flows as well as on MAC
layer performance.
The intra-flow contention discussed here is the MAC layer contention for the shared
channel among nodes of the same flow that are within each other's interference range. Li
et al. observed that IEEE 802.11 fails to achieve the optimum chain scheduling
[91]. Nodes in a chain experience different amounts of competition, as shown in
Fig. 11–1(a), where the small circle denotes a node's valid transmission range and the large
circle denotes its interference range. Thus the transmission of node 0 in a 7-node chain
experiences interference from three subsequent nodes, while the transmission of node 2 is
interfered with by five other nodes. This means that node 0, i.e., the source, can actually
inject more packets into the chain than the subsequent nodes can forward. These packets are
eventually dropped at the subsequent nodes. We call this the intra-flow contention
problem.
In addition to the above contention inside a multi-hop flow, contention between
flows can also seriously decrease the end-to-end throughput. If two or more flows pass
through the same region, the forwarding nodes of each flow encounter contention not only
from their own flow but also from the other flows. Thus the previous hops of these flows can
inject more packets into the region than the nodes in the region can forward.
These packets are eventually dropped by the congested nodes. As shown in Fig. 11–1(b),
Figure 11–1: Chain topology (a) and cross topology (b)
there are two flows, one from 0 to 6 and the other from 7 to 12. Obviously, node
3 encounters the most frequent contention and has few chances to successfully transmit
packets to its downstream nodes. Packets accumulate at and are dropped by nodes
3, 9, 2, 8 and 1. We call this the inter-flow contention problem.
These two problems are very common and have unique features in multihop ad hoc
networks. First, packet forwarding at each hop has to contend for the channel with
other traffic in the neighborhood. Second, inter-flow contention appears not only when
several flows pass through the same forwarding node, but also when the flows' paths
are close enough to each other that the MAC layer allows only one transmission at a time to
avoid collisions. Third, once congestion occurs, MAC layer contention becomes so severe
that the MAC layer throughput decreases due to the increasing collision probability
([15, 154, 160]). This does not relieve the congestion; instead, more
packets accumulate in the queues.
Given the above two problems, it is easy to see why traditional congestion control
schemes such as TCP, and heavy UDP traffic, perform poorly in ad hoc networks. TCP cannot
respond to congestion in time and often decreases its sending window long after the
congestion occurs, since it depends on end-to-end feedback and timeouts to conduct
congestion control. Fig. 11–2(a) demonstrates that TCP traffic introduces a great number of
packet collisions. Fig. 11–2(b) illustrates in more detail
Figure 11–2: TCP performance in a 9-node chain topology. (a) Collided packets (collided RTS/s, collided ACK/s, dropped pkts/s vs. number of TCP flows); (b) average queue length vs. node ID with 6 TCP flows.
why this can happen. As discussed in the previous paragraphs, nodes 2, 3, and
4 in a 9-node chain encounter more medium contention than nodes 0 and 1, so packets
accumulate at these nodes and keep contending for channel access. This results in severe
medium collisions and many dropped packets. TCP acknowledgements are delayed and
even dropped, not only because of the increased MAC layer collision probability but also
because of the increased queue length (notice that each node has only one shared outgoing
link and a corresponding queue for all outgoing packets). We find in the simulations that
the TCP source often detects congestion through the sender's timeout events instead of
duplicate acknowledgements, which greatly degrades the performance of congestion control.
Here, the simulation settings are the same as those of Section 11.4.1, with different
numbers of TCP flows traveling from node 0 to node 8 in a 9-node chain topology. Similarly,
since UDP traffic has no congestion control, it results in even more severe congestion
and many packet collisions; both end-to-end throughput and delay degrade significantly,
as the later simulation results illustrate.
Therefore, we argue that a good solution to the flow and congestion control problem in
ad hoc networks must consider the MAC layer characteristics and respond quickly to
congestion. An intuitive solution to the above problems is to let the downstream nodes
and the congested ones obtain channel access to transmit packets while keeping the others
silent, and hence smoothly forward each packet to the destination without encountering
severe collisions or excessive delay at the forwarding nodes. This motivates the scheme
presented in the next section.
11.3 OPET: Optimum Packet Scheduling for Each Traffic Flow
11.3.1 Overview
The objective of our scheme is to approximate Optimum Packet scheduling for Each
Traffic flow (OPET). Optimum here means that our scheme achieves, for each single traffic
flow, the optimum packet scheduling obtained from the optimal scheduling for the chain
topology. By solving the intra-flow contention and inter-flow contention problems,
OPET can significantly reduce the resources wasted on packets dropped at forwarding nodes
and thus can significantly improve the end-to-end performance.
OPET includes four major mechanisms. The first is to assign high channel access
priority to the current receiver; this achieves the optimum packet scheduling for the chain
topology and avoids severe intra-flow contention in each flow. The second is
hop-by-hop backward-pressure scheduling: the forwarding nodes as well as the source are
notified of the congestion and are then restrained from sending more packets to their next
hops. This efficiently reduces the MAC layer contention caused by the intra-flow contention
and inter-flow contention at the congested nodes, by keeping the other nodes silent. The
third is not to allow the source node to occupy the whole outgoing queue, which efficiently
prevents irresponsible applications from injecting more packets than the network can handle,
and leaves more queue space for other flows passing through the node. The last is Round
Robin scheduling in the queue management, which further alleviates the unfairness problem
between traversing flows and greedy source flows.
11.3.2 Rule 1: Assigning High Channel Access Priority to Receivers
In each multi-hop flow, an intermediate node on the path needs to contend for the
shared channel with the previous nodes when forwarding a received packet to the next
hop. One way to prevent the first few nodes on the path from injecting more packets than the
succeeding nodes can forward is to assign high channel access priority to each node just
after it receives a packet. That is to say, the source node holds the succeeding packets
until the preceding packet has been transmitted out of its interference range. This achieves
the optimum scheduling for one-way traffic in the regular chain topology.
For example, in Fig. 11–1(a), node 1 has the highest priority when it receives a
packet from node 0 and then forwards the packet to node 2. Node 2 immediately forwards
the packet received from node 1 to node 3, and node 3 likewise immediately forwards it to
node 4. Because node 0 can sense the transmissions of nodes 1 and 2, it does not interfere
with these two nodes. Node 0 cannot send packets to node 1 while node 3 forwards a packet
to node 4 either, because node 1 is in the interference range of node 3. When node 4
forwards a packet to node 5, node 0 gets a chance to send a packet to node 1. Similar
behavior holds for the succeeding nodes along the path: nodes 0 and 4 can send packets to
their next hops simultaneously, as can any nodes four hops apart along the path. Thus,
the procedure can utilize 1/4 of the channel bandwidth, the maximum throughput which
can be achieved by the chain topology [91]. For a more random path, it is possible for
more than 4 hops to interfere with the first-hop transmission, so the maximum throughput
is less than 1/4 of the channel bandwidth. OPET, however, can still avoid many collisions
and approach the maximum throughput the topology can achieve by allowing the
downstream nodes to access the channel with higher priority.
To incorporate this procedure into the IEEE 802.11 DCF protocol, one solution is to assign higher channel access priorities to those packets which have traversed more hops. This requires the MAC layer to support many different priority levels and needs the hop count information from the routing protocol. Currently, we opt for a simpler implementation which sets the initial backoff window size of each receiver to 8 (i.e., whenever a node receives a packet, its backoff window is set to 8). When the node finishes the transmission, the scheme resets its contention window size to the normal value of 32 [68]. The example in Fig. 11–3 shows the optimum packet scheduling for the chain topology implemented by
Figure 11–3: Optimum packet scheduling for chain topology (each numbered block denotes the transmission of the nth packet over time slots t0–t12)
our scheme. Notice that nodes 1, 2, and 3 cannot be receivers at the same time due to the shared channel, and node 0 has low channel access priority and can rarely send succeeding packets before the preceding packets have been forwarded to node 4.
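The receiver-priority rule can be sketched in a few lines. The following Python fragment is a minimal simulation-style illustration of the stated values; the `Node` class and its method names are our own, not part of the dissertation's simulator or the 802.11 specification:

```python
# Sketch of Rule 1: a node that has just received a packet contends
# with a small backoff window (high priority); after it forwards the
# packet, its window returns to the normal DCF value.
import random

HIGH_PRIORITY_CW = 8    # initial backoff window right after a reception
NORMAL_CW = 32          # normal IEEE 802.11 initial contention window

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.cw = NORMAL_CW

    def on_packet_received(self):
        # Just received a packet to forward: contend with high priority.
        self.cw = HIGH_PRIORITY_CW

    def on_transmission_finished(self):
        # Forwarded the packet: fall back to the normal contention window.
        self.cw = NORMAL_CW

    def draw_backoff(self):
        # DCF draws a backoff slot uniformly from [0, cw - 1].
        return random.randrange(self.cw)

relay = Node(1)
relay.on_packet_received()
assert relay.draw_backoff() < HIGH_PRIORITY_CW   # high-priority contention
relay.on_transmission_finished()
assert relay.cw == NORMAL_CW
```

A freshly arrived packet thus wins contention against upstream nodes with high probability, since their backoff values are drawn from the wider window.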
Rule 1 only considers the interference within a single flow. If the next hop of the current receiver is busy or interfered with by other transmissions, the receiver cannot seize the channel even with the highest priority. We therefore introduce the backward-pressure scheduling to deal with the inter-flow contention.
11.3.3 Rule 2: Backward-Pressure Scheduling
Similar to the above mechanism, which keeps the source and upstream nodes from overloading the downstream nodes, the basic idea of backward-pressure scheduling is to keep nodes from transmitting to their already congested downstream nodes, and hence to yield the channel access to the congested nodes so they can clear the congestion, as well as to avoid severe medium contention. To avoid both severe congestion and medium collision, the mechanism detects an early sign of congestion for each flow, i.e., when the downstream node has enough packets of the flow to make full use of the channel bandwidth, and accordingly starts the corresponding procedures.
The mechanism includes a transmission blocking procedure and a transmission resuming procedure. It requires that each node monitor the number of packets of each individual flow in the shared outgoing queue. Let n_i denote the number of packets of flow i. If n_i reaches a backward-pressure threshold, the transmission of flow i from its upstream node will be blocked, and the upstream node is referred to as a restricted node of flow i in the following discussion. When the node successfully forwards some packets to its downstream node so that n_i is less than the backward-pressure threshold, it initiates the transmission resuming procedure to allow the restricted node to transmit packets of flow i.
Our scheme OPET sets the backward-pressure threshold to one, which is the upper limit on the number of packets of each flow at each intermediate node. The smaller the value, the less the medium contention; and one is large enough to make full use of the channel bandwidth while remaining simple to implement. Notice that in ad hoc networks, the wireless channel is shared by all the nodes in the same neighborhood. At any one time, at most one node can successfully access the channel and at most one packet can be successfully transmitted and received. Therefore, among all the nodes which are in the interference range of each other, if the total number of backlogged packets is equal to or larger than 1 at any time, the channel bandwidth will not be wasted due to idle periods. For example, in a chain topology with more than 3 hops, the optimum chain throughput in the IEEE 802.11 MAC protocol is 1/4 of the channel bandwidth, and therefore the optimum threshold for the backward-pressure objective is 1/4. Considering other contending traffic in the neighborhood, this number should be even smaller to minimize the medium contention while still making full use of the channel bandwidth. Since a fractional threshold is difficult to implement, we opt for the nearest integer, 1, as the value of this threshold.
The transmission blocking procedure takes advantage of the RTS/CTS exchange in the IEEE 802.11 MAC protocol to restrict the transmission from the upstream nodes. A negative CTS (NCTS) should be returned in response to the RTS when the intended receiver has reached the backward-pressure threshold for the corresponding flow. To uniquely identify each flow, the RTS for multi-hop flows (RTSM) includes two more fields than RTS, i.e., the source address and the flow ID. The RTS for the last-hop transmission need not include these two fields, because its intended receiver is the destination of the flow, which should not limit its previous hop from sending packets to itself. The NCTS packet has the same format as CTS except for a different value in the frame type field. The format of RTSM is shown in Fig. 11–4.
The transmission resuming procedure adopts receiver-initiated transmission. It uses a three-way handshake CTS/DATA/ACK instead of the normal four-way handshake RTS/CTS/DATA/ACK, because the downstream node already knows that the restricted node has packets destined to it. The CTS to resume the transmission (CTSR) includes two more fields than CTS, the source address and the flow ID, to uniquely specify the flow, as shown in Fig. 11–4. Like CTS, CTSR carries no information about its transmitter, unlike RTS. The two fields, i.e., the source address and the flow ID, uniquely specify the next hop that the flow should pass through; hence we assign different flow IDs to flows from the same application but with different paths if multipath routing is used. The procedure of transmitting CTSR is similar to that of RTS and allows multiple retransmissions before dropping it. The message sequences in different situations are shown in Fig. 11–6.
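The two frame layouts can be expressed compactly. The sketch below packs the fields of Fig. 11–4 with Python's `struct` module; the field sizes in octets follow the figure, while the concrete field values and the helper names are illustrative assumptions, not from the dissertation:

```python
# Sketch of the RTSM and CTSR frame layouts from Fig. 11-4.
import struct

# RTSM: Frame Control (2) | Duration (2) | Receiver Addr (6) |
#       Transmitter Addr (6) | Source Addr (4) | Flow ID (4) | FCS (4)
RTSM_FMT = "!H H 6s 6s 4s I 4s"

# CTSR: Frame Control (2) | Duration (2) | Receiver Addr (6) |
#       Source Addr (4) | Flow ID (4) | FCS (4)
CTSR_FMT = "!H H 6s 4s I 4s"

def pack_rtsm(fc, dur, rx, tx, src, flow_id, fcs):
    return struct.pack(RTSM_FMT, fc, dur, rx, tx, src, flow_id, fcs)

def pack_ctsr(fc, dur, rx, src, flow_id, fcs):
    return struct.pack(CTSR_FMT, fc, dur, rx, src, flow_id, fcs)

# Hypothetical field values, used only to check the frame sizes.
rtsm = pack_rtsm(0x00B4, 0, b"\x00" * 6, b"\x00" * 6,
                 b"\x0A\x00\x00\x01", 7, b"\x00" * 4)
assert len(rtsm) == 2 + 2 + 6 + 6 + 4 + 4 + 4   # 28 octets
ctsr = pack_ctsr(0x00C4, 0, b"\x00" * 6,
                 b"\x0A\x00\x00\x01", 7, b"\x00" * 4)
assert len(ctsr) == 2 + 2 + 6 + 4 + 4 + 4       # 22 octets
```

The `(source address, flow ID)` pair is what lets a receiver match an incoming RTSM or CTSR against the corresponding flow-table entry.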
The transmission resuming procedure also employs a complementary mechanism, i.e., resuming transmission by the upstream node itself. We notice that mobility in ad hoc networks could result in link breakage followed by the transmission failure of CTSR; CTSR may also collide several times and be dropped. The restricted node should start a timer, i.e., the flow-delay timer, and begin retransmission if its intended receiver has not sent a CTSR back within a long period, which we set to one second in our scheme. If the timeout value is too large, the flow may stay blocked for a very long time when the CTSR fails. If it is too small, the transmission of the blocked flow may be resumed earlier than the time when the downstream node eliminates the congestion. One second is a tradeoff between the two.
In the backward-pressure scheduling scheme, each node needs to maintain a table, i.e., the flow-table, to record the information of the flows which currently have packets in the outgoing queue. A table entry is created when a flow has its first packet in the outgoing queue, and is deleted when all the packets of the flow have been forwarded to the downstream node. Thus the maximum size of the table is the queue size, reached if all packets in the queue belong to different flows and the queue is full. The flow information of each table entry includes the source-address, flow-ID, number-of-packets in the queue, restriction-flag, restriction-start-time, upstream-node-address, and block-flag. The restriction-flag indicates whether the node is restricted from forwarding packets of this flow to the downstream node, and the restriction-start-time indicates when the restriction started. The block-flag indicates whether the transmission of the upstream node is blocked. The algorithm for the backward-pressure scheme is shown in Fig. 11–5.
Figure 11–4: The packet format of RTSM and CTSR. RTSM frame: Frame Control (2 octets), Duration (2), Receiver Address (6), Transmitter Address (6), Source Address (4), Flow ID (4), FCS (4). CTSR frame: Frame Control (2), Duration (2), Receiver Address (6), Source Address (4), Flow ID (4), FCS (4).
A simple example illustrating how our scheme works is shown in Fig. 11–7(a) and Fig. 11–7(b). When congestion occurs at node 4 and node 4 cannot forward packet 0 to its downstream node 5, as shown in Fig. 11–7(a), the flow along the chain accumulates one packet at each node from node 1 to node 4, which then prevents nodes 0, 1, 2, and 3 from contending for the channel, reducing the contention at the congested node 4. After the congestion at node 4 is eliminated, the transmission is resumed by the congested node, as shown in Fig. 11–7(b). Notice that in a random topology, the congestion can result from the interference or contention of any crossing and/or neighboring flows such that
Algorithm 1 Backward-Pressure Scheme
RecvRTSM(Packet p)
1: number-of-packets = CheckFlowTable(p)
2: if number-of-packets > backward-pressure-threshold then
3:   TransmitNCTS()
4:   SetFlowTable(p, block-flag = 1)
5: else
6:   TransmitCTS()
7: end if

RecvNCTS(Packet p)
1: SetFlowTable(p, restriction-flag = 1, restriction-start-time = now)

Algorithm 2 Resume-Transmission
ResumeTransmissionFromReceiver(Packet p)
Require: p is the packet that has just been successfully transmitted to the downstream node by the MAC layer
1: block-flag = CheckFlowTable(p)
2: number-of-packets = CheckFlowTable(p)
3: if block-flag = 1 and number-of-packets < backward-pressure-threshold then
4:   TransmitCTSR()
5: end if

RecvCTSR(Packet p)
1: data = GetDataPktFromQueue(p)
2: if data != NULL then
3:   TransmitDATA(data)
4: end if

ResumeTransmissionFromTransmitter(Packet data)
Require: The retransmission timer for the restricted flow expires at the transmitter; data is one packet of the restricted flow in the queue
1: TransmitRTSM(data)

Figure 11–5: The algorithms of the backward-pressure scheme
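The handlers of Fig. 11–5 can be rendered as executable code. The sketch below is our own Python approximation: the `Node` class, the dictionary-based flow table, and the recorded "sent" frames are assumptions, and blocking is triggered when the packet count reaches the threshold, following the prose description of the threshold:

```python
# Hedged Python sketch of the backward-pressure handlers.
import time

BACKWARD_PRESSURE_THRESHOLD = 1  # OPET sets the threshold to one packet

class FlowEntry:
    def __init__(self):
        self.number_of_packets = 0
        self.restriction_flag = False
        self.restriction_start_time = None
        self.block_flag = False

class Node:
    def __init__(self):
        self.flow_table = {}   # (source_addr, flow_id) -> FlowEntry
        self.sent = []         # record of control frames "sent"

    def entry(self, flow):
        return self.flow_table.setdefault(flow, FlowEntry())

    def recv_rtsm(self, flow):
        e = self.entry(flow)
        if e.number_of_packets >= BACKWARD_PRESSURE_THRESHOLD:
            self.sent.append(("NCTS", flow))   # block the upstream node
            e.block_flag = True
        else:
            self.sent.append(("CTS", flow))    # allow the DATA to come

    def recv_ncts(self, flow):
        e = self.entry(flow)
        e.restriction_flag = True              # stop sending this flow
        e.restriction_start_time = time.time()

    def on_data_forwarded(self, flow):
        # Called after the MAC layer forwards a packet downstream.
        e = self.entry(flow)
        e.number_of_packets -= 1
        if e.block_flag and e.number_of_packets < BACKWARD_PRESSURE_THRESHOLD:
            self.sent.append(("CTSR", flow))   # invite the restricted node
            e.block_flag = False

node = Node()
flow = ("10.0.0.1", 7)                 # hypothetical (source, flow ID) pair
node.entry(flow).number_of_packets = 1  # flow already at the threshold
node.recv_rtsm(flow)                    # upstream RTSM arrives -> NCTS
node.on_data_forwarded(flow)            # congestion clears -> CTSR
print(node.sent)                        # [('NCTS', ...), ('CTSR', ...)]
```

The receiver-initiated CTSR in `on_data_forwarded` is what replaces the usual RTS/CTS opening of the four-way handshake when a blocked flow resumes.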
the considered node cannot capture the channel in time. OPET can efficiently force the
upstream nodes of these flows to yield the channel access opportunity to the congested
nodes which then can quickly forward the backlogged packets and hence eliminate the
congestion.
It is important to note that the control overhead of the backward-pressure scheduling is very small. The backward-pressure information is carried by the original RTS/CTS message sequences of IEEE 802.11, and a blocked flow is resumed by a three-way
Figure 11–6: Message sequence for packet transmission (A: transmitter; B: receiver). Non-last-hop transmission: RTSM/CTS/DATA/ACK; last-hop transmission: RTS/CTS/DATA/ACK; blocked transmission: RTSM/NCTS; resumed transmission: CTSR/DATA/ACK.
handshake procedure with less overhead than the original four-way handshake. Moreover, our scheme maintains only a small table entry for each active flow, i.e., each flow that has at least one packet at the considered node. In a mobile ad hoc network, the number of active flows per node is restricted by the limited bandwidth and processing capability, and hence is of much smaller order than in wired networks; thus scalability should not be a major concern in our scheme.
11.3.4 Rule 3: Source Self-Constraint Scheme
When the backward-pressure scheduling is adopted, packets can accumulate only at the source node. The application at the source should slow its sending rate if the number of its packets in the outgoing queue reaches the source-flow threshold. If it fails to do so, the queue should drop the succeeding packets from it. This prevents a congested flow from occupying the whole queue space, so other flows always have a chance to utilize the queue space and transmit packets.
Our scheme OPET sets the source-flow threshold to the smallest integer greater than c + h/4, where h is the hop count of the flow. The quantity c indicates the maximum burst of packets that the queue can tolerate for the flow. The term h/4 comes from the optimum scheduling of the chain topology, which allows simultaneous transmissions at nodes that are 4 hops apart. Considering that the channel is shared by other traffic flows in a random topology, the achievable throughput is equal to or less than that when there is only a single flow. Therefore, in a general topology, c + h/4 is large enough to saturate a path if the source is greedy, and vacates more queue space for traversing flows than when there is no such
Figure 11–7: The packet scheduling for resolving congestion. (a) The packet scheduling when congestion occurs at node 4. (b) The packet scheduling after eliminating the congestion at node 4. (n denotes the nth packet.)
source self-constraint scheme. This threshold is applied to UDP flows and is optional for TCP flows. Notice that TCP can only inject packets into the queue up to the receiver's advertised window size. Furthermore, Chen et al. [24] have discovered in their simulations that TCP's congestion window size should be less than kN when considering transmission interference at the MAC layer, where 1/8 < k < 1/4 and N is the number of round-trip hops. So c + h/4 should work for TCP flows if we set the congestion window limit less than the upper bound kN.
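The source-flow threshold can be computed directly. The sketch below is a minimal illustration, where the value of c is an arbitrary assumption for the example:

```python
# Source-flow threshold for Rule 3: the smallest integer greater than
# c + h/4, where h is the flow's hop count and c is the maximum packet
# burst tolerated for the flow.
import math

def source_flow_threshold(c, h):
    # "Smallest integer greater than c + h/4" is floor(x) + 1, which
    # differs from ceil(x) exactly when c + h/4 is itself an integer.
    return math.floor(c + h / 4) + 1

# A 6-hop flow with a tolerated burst of c = 2 packets:
print(source_flow_threshold(2, 6))   # -> 4  (2 + 1.5 = 3.5; next integer is 4)
print(source_flow_threshold(2, 8))   # -> 5  (2 + 2 = 4; strictly greater gives 5)
```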
11.3.5 Rule 4: Round Robin Scheduling
Flow-based Round Robin scheduling is adopted in our scheme for the queue management. It aims to further address the unfairness problem resulting from greedy sources, especially those of one-hop flows. When a greedy source node is also a forwarding node of other flows, under FIFO scheduling it may continuously transmit multiple packets generated by its own applications, because it can hold as many packets as these applications can inject into the queue. The upstream nodes of traversing flows then need to contend for the channel with this node to squeeze their packets into the limited queue space, most of which may already be occupied by packets generated at this node. The Round Robin scheme can efficiently allow the traversing traffic to pass through and hence avoid this starvation problem. Notice that, if the flow that the head-of-queue packets belong to is blocked by its downstream node, the node should attempt to transmit packets of those flows which are not blocked and may have better path characteristics, to avoid the head-of-queue blocking problem.
Another option to solve the unfairness problem due to greedy sources is to allocate a separate queue space for the packets originated at the considered node. Only when the amount of data of each source flow in the shared outgoing queue is smaller than a certain threshold, i.e., the backward-pressure threshold in the proposed scheme, can the packets in the separate queue be passed to the shared outgoing queue. Apparently, this method requires additional queue space.
Round Robin not only addresses the unfairness problem due to greedy sources, but also provides fairer scheduling for the traversing flows than FCFS (first come, first served). If variable packet sizes are used in the network, Deficit Round Robin (DRR) [115] or Surplus Round Robin (SRR) [3] could be used. Different fair queueing schemes within the proposed framework will be evaluated in our future work.
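Flow-based Round Robin with blocked-flow skipping, as described for Rule 4, can be sketched as follows; the data structures and flow names are illustrative assumptions:

```python
# Hedged sketch of flow-based Round Robin dequeueing that skips flows
# blocked by their downstream node (avoiding head-of-queue blocking).
from collections import OrderedDict, deque

class FlowRoundRobinQueue:
    def __init__(self):
        self.flows = OrderedDict()   # flow id -> deque of packets

    def enqueue(self, flow, packet):
        self.flows.setdefault(flow, deque()).append(packet)

    def dequeue(self, blocked=frozenset()):
        # Visit flows in round-robin order, skipping blocked flows.
        for flow in list(self.flows):
            if flow in blocked:
                continue
            pkt = self.flows[flow].popleft()
            if self.flows[flow]:
                self.flows.move_to_end(flow)  # flow goes to back of the ring
            else:
                del self.flows[flow]
            return pkt
        return None

q = FlowRoundRobinQueue()
for i in range(3):
    q.enqueue("greedy-source", f"s{i}")   # greedy local flow
q.enqueue("traversing", "t0")             # one traversing packet

# Despite the greedy flow's 3 backlogged packets, the traversing flow is
# served on the second dequeue rather than waiting behind all of them.
print(q.dequeue())                        # -> s0
print(q.dequeue())                        # -> t0
```

Under FIFO the traversing packet would have waited behind all three local packets; the round-robin ring interleaves the flows instead.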
11.4 Performance Evaluation
We now evaluate the performance of our scheme OPET and compare it with the IEEE 802.11 MAC protocol. The simulation tool is ns-2, one of the most widely used network simulators. We use pre-computed shortest paths, and there is no routing overhead unless otherwise indicated. The channel bandwidth is 2 Mbps and the payload size of each DATA packet is 1000 bytes. The transmission range is 250 meters, and the sensing range is 550 meters.
In our simulations, the following important performance metrics are evaluated.

Aggregate end-to-end throughput – The amount of data delivered to the destinations per second.

Average end-to-end delay – The average end-to-end delay of all packets which reach the destinations.

Data transmission efficiency – The ratio of the sum of hop counts of the successfully delivered packets to the number of DATA packets transmitted. This metric reflects the resources wasted by collided DATA packets and by DATA packets discarded due to queue overflow at the intermediate nodes of the path.

Normalized control overhead – The ratio of the number of all kinds of control packets, including RTS(M), (N)CTS, CTSR and ACK, to the sum of hop counts traversed by the successfully delivered DATA packets.

Fairness index – The commonly used fairness index for all flows x_i (1 ≤ i ≤ n), i.e., f = (∑_{i=1}^{n} x_i)^2 / (n · ∑_{i=1}^{n} x_i^2), where x_i denotes the end-to-end throughput of the ith flow.
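The fairness index (Jain's index) is a one-line computation; a small sketch with made-up throughput vectors:

```python
# Jain's fairness index: f = (sum x_i)^2 / (n * sum x_i^2).
# f = 1 when all flows get equal throughput; f -> 1/n when one flow
# captures everything and the rest starve.
def fairness_index(throughputs):
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(fairness_index([1.0, 1.0, 1.0, 1.0]))   # -> 1.0  (perfectly fair)
print(fairness_index([4.0, 0.0, 0.0, 0.0]))   # -> 0.25 (one flow starves the rest)
```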
Figure 11–8: Simulation results for the 9-node chain topology (Fig. 11–3) and cross topology (Fig. 11–1(b)): (a) aggregate throughput and (b) end-to-end delay versus total offered load, for Basic and OPET under one-way, two-way, and cross traffic.
In the simulation study, our scheme will be referred to as the Optimum Packet Scheduling for Each Flow (OPET), and the IEEE 802.11 protocol without the packet scheduling algorithm will be referred to as the Basic scheme.
11.4.1 Simple Scenarios
We first investigate how well our scheme works in simple scenarios, i.e., the nine-node chain topology with one-way and two-way traffic shown in Fig. 11–3, and the cross-traffic scenario shown in Fig. 11–1(b).
Fig. 11–8 shows that our scheme improves the throughput by 55%, 120% and 33% compared to IEEE 802.11 in these three scenarios under heavy traffic load, respectively. We observe that our scheme maintains a small and stable end-to-end delay under all traffic loads, while the end-to-end delay increases dramatically with increasing traffic load in the IEEE 802.11 protocol. The reason is straightforward: our scheme removes much of the MAC layer contention, i.e., the intra-flow contention and the inter-flow contention, and eliminates the excessive queueing delay at the forwarding nodes.
11.4.2 Random Topology
In these simulations, 60 nodes are randomly placed in a 1000 m × 1000 m area. The source of each flow randomly selects one node as the destination, which is at least m hops away from the source. In our studies, we choose m = 1 or 3. There are in total 30 flows with the same CBR/UDP traffic in the network. All results are averaged over 30 random simulations of 300 simulated seconds each.
We observe from Fig. 11–9(a) that when the minimum number of hops for each flow increases, the aggregate end-to-end throughput of both protocols decreases. This is reasonable because packets of multihop flows with longer paths have to pass more links and thus consume more resources for the same arriving traffic.
For the random traffic without a hop count limitation, our scheme OPET improves the end-to-end throughput by 100% under heavy traffic. This is because OPET removes much of the channel contention due to the intra-flow contention and inter-flow contention, and far fewer accumulated packets end up being dropped by the forwarding nodes. The reason the Basic scheme can maintain a certain throughput under heavy traffic is that the IEEE 802.11 MAC protocol favors one-hop or two-hop flows, which suffer no or much less contention from hidden terminals. These flows can capture the whole bandwidth under heavy traffic, which contributes to the aggregate end-to-end throughput. However, other flows with longer paths are starved with zero throughput, as shown in Fig. 11–9(b), which presents one random example of the throughput distribution among flows under heavy traffic and also shows the improved fairness in OPET.
If the source-destination pairs of all flows are at least 3 hops apart, OPET can still maintain a high end-to-end throughput under heavy traffic load, while the Basic scheme's end-to-end throughput drops almost to zero. In the Basic scheme, the intra-flow contention allows the sources of multihop flows to inject more packets into the network than the network can forward, and the inter-flow contention makes the situation worse. It is not surprising in the Basic scheme that the longer the path of a flow, the lower the end-to-end throughput it can
Figure 11–9: Simulation results for the random topology: (a) end-to-end throughput, (c) end-to-end delay, (d) normalized control overhead, (e) data transmission efficiency, and (f) fairness index versus total offered load, for OPET and Basic with 1-hop and 3-hop minimum path lengths; (b) per-flow end-to-end throughput (number of packets, log scale) for one random example with 30 flows of 1 to 7 hops.
achieve. By reducing the intra-flow and inter-flow contention, our scheme always maintains a high end-to-end throughput for all flows at any traffic load, and the improvement is more than 12-fold under heavy traffic compared to the IEEE 802.11 protocol.
Fig. 11–9(c) shows that OPET has a much smaller end-to-end delay than the Basic scheme. Also, for multihop flows, our scheme provides a stable end-to-end delay in spite of the high traffic load, while in the Basic scheme, the end-to-end delay rapidly increases with the offered load. This is because OPET greatly reduces the number of packets accumulated in the outgoing queue at each node and thus greatly reduces the queueing delay. In addition, OPET reduces the intra-flow and inter-flow contention, which also decreases the delay in accessing the channel at the MAC layer. It also verifies that in OPET there is no severe congestion that could result in excessive queueing delay at the forwarding nodes.
Fig. 11–9(e) shows that OPET achieves a DATA packet transmission efficiency as high as about 90%, while the Basic scheme has a much lower value, even less than 5% for multihop flows. This metric indicates that the Basic scheme discards many of the packets that the sources send out before they reach the intended destinations. These packets waste a lot of wireless bandwidth and consume significant power. OPET greatly reduces this kind of waste and utilizes the resources to achieve a higher end-to-end throughput.
In OPET, the transmission efficiency of DATA packets is still less than 1. This is because OPET still runs on a contention-based MAC protocol, i.e., the IEEE 802.11 MAC protocol, where the hidden terminal problem still results in DATA packet collisions.
Fig. 11–9(d) shows that OPET maintains a small and stable normalized control overhead. This verifies that OPET reduces many collisions at the MAC layer and hence avoids many unsuccessful RTS/CTS negotiations and DATA transmissions. The Basic scheme has a much higher control overhead, which rapidly increases with the offered
load for multihop flows. This implies that the Basic scheme is not appropriate for multihop
ad hoc networks while OPET is a good choice for the multihop flows in the shared wireless
channel environment and is scalable for larger networks where there are more multihop
flows with longer paths.
Fig. 11–9(f) shows that OPET improves the fairness index by up to 100% compared to the Basic scheme. As in the random example shown in Fig. 11–9(b), the Basic scheme serves only one-hop or two-hop flows while starving all other multihop flows; it is unfair to flows with large hop counts. OPET gives much more bandwidth to multihop flows with large hop counts than the Basic scheme does. The fairness index is still much less than one in our scheme because the traffic distribution is unbalanced in the random scenarios, and flows with shorter paths still have advantages over flows with longer paths.
11.4.3 Random Topology with Mobility
In these simulations, 60 nodes are randomly placed in a 1000 m × 1000 m area. All nodes move randomly in the rectangular grid with a randomly chosen speed (uniformly distributed between 0 and 10 m/s). There are in total 30 flows with the same CBR/UDP traffic. The source of each flow randomly selects one node as the destination. The routing scheme is AODV [108]. All results are averaged over 30 random simulations of 300 seconds of simulated time each.
The purpose of considering mobility is only to illustrate that our scheme works well in mobile scenarios with an on-demand routing scheme. In fact, we find in extensive simulations that mobility does not change the results much. Therefore, we only show the aggregate end-to-end throughput in Fig. 11–10, which shows that OPET has about 50% higher throughput than the Basic scheme. All other performance metrics are also similar to those in the scenario where the source and destination are randomly selected without a hop count limitation in the static topology.
We also notice that mobility decreases the throughput. This is because the route may be unavailable during certain periods due to mobility, although each source has a route
Figure 11–10: Simulation results for the random topology with mobility (end-to-end throughput versus total offered load for OPET and Basic)
to its destination at the start time. In addition, the extensive simulations also indicate that
mobility increases the end-to-end delay because the route searching and repairing time
comes into play.
11.4.4 Simulation Results for TCP Traffic
We first investigate how well our scheme performs in the nine-node chain topology with different numbers of TCP flows. Fig. 11–11(a) shows that our scheme OPET reduces packet collisions by about 40% for both RTS and ACK frames, and the number of dropped TCP packets is reduced by about 80%. This verifies that the hop-by-hop congestion control can effectively reduce medium contention and collisions. Fig. 11–11(b) demonstrates that OPET improves the aggregate throughput of TCP flows by about 5%, and the fairness is even better than in the Basic scheme. Fig. 11–11(c) illustrates that, with OPET, a TCP source node can detect the congestion status by simply observing its queue length and may accordingly change its sending rate to obtain better performance.
Now we examine the TCP performance in a larger network with a grid topology, where inter-flow contention is a common phenomenon. The grid topology is shown in Fig. 11–12; there are in total 100 nodes, and the one-hop distance is set to 200 meters. 16 TCP flows, 8 horizontal and 8 vertical, run for 300 seconds in the simulation. Compared with the Basic scheme, OPET improves the end-to-end throughput from 547 Kbps to
Figure 11–11: Simulation results for the TCP traffic. (a) Collisions of TCP traffic in the chain topology (collided RTS/s, collided ACK/s, and dropped packets/s versus the number of TCP flows). (b) Throughput and fairness in the chain topology versus the number of TCP flows. (c) Queue length per node for TCP traffic in the chain topology (3 and 6 TCP flows).
Figure 11–12: Grid topology with 16 TCP flows
603 Kbps (about 10%), and reduces the rate of collided RTS frames from 1015 pkt/s to 802 pkt/s (about 21%).
These results show that OPET can also improve TCP performance, although TCP flows tend to generate burstier traffic. This is because OPET reduces many packet drops due to MAC collisions as well as queue overflow, and hence the TCP source conducts far fewer retransmissions and also experiences much less oscillation in its sending window size. Optimizing the interaction between TCP and OPET should provide better support for TCP traffic and will be studied in future work.
11.4.5 Notes on the Relative Benefits of the Four Techniques
The first mechanism, which assigns a higher channel access priority to a downstream node when it receives a packet, works very well for the chain topology with one-way traffic. It gives an efficient solution to the intra-flow contention problem. However, when inter-flow contention comes into play in a more general topology, the MAC layer contention is still severe if only the first mechanism is used. In these scenarios, the combination with the backward-pressure scheduling greatly alleviates both intra-flow and inter-flow contention and contributes to the performance gain in end-to-end throughput and delay. Despite the greatly improved aggregate performance, fairness is not improved much, and starvation is still a severe problem for multihop flows, especially when some source nodes also work as forwarding nodes. Therefore, we introduce the source self-constraint scheme and
the Round Robin scheme into the framework. The former reserves queue space for traversing packets, reducing the chance that they are dropped due to queue overflow. The latter lets the traversing flows obtain relatively fair throughputs compared with the flows generated at the node itself, which may otherwise occupy most of the queue space and hence get more chances to be transmitted. More extensive simulation results illustrating these relative benefits are omitted here due to the page limit.
11.5 Related Work and Discussion
Recently, many schemes have been presented to alleviate MAC layer collisions. The authors of [120, 48] proposed receiver-initiated transmission schemes which work well when the intended receiver knows the traffic load information exactly. Wang and Garcia-Luna-Aceves [132] proposed a hybrid channel access scheme which combines both sender-initiated and receiver-initiated collision avoidance handshakes. Their scheme could alleviate the fairness problem in some cases without sacrificing much throughput or simplicity, but cannot trigger the desired receiver-initiated collision avoidance handshake in some scenarios due to the lack of flow contention information. Berger et al. [10] presented two MAC layer enhancements, i.e., quick-exchange and fast-forward, to address self-contention in ad hoc networks. The former allows the receiver to return a DATA packet to the sender, and the latter includes an implicit RTS to the next hop. They can save some transmission negotiation procedures, i.e., RTS/CTS exchanges.
In the last few years, several papers ([98, 81]) have addressed distributed packet scheduling that accounts for MAC layer collisions in multihop ad hoc networks. The proposed schemes use different backoff window sizes to assign different channel access priorities to packets. Luo et al. [98] constructed the flow contention graph to achieve better fairness among one-hop flows between different node pairs. Kanodia et al. [81] applied EDF (Earliest Deadline First) criteria to obtain a smaller end-to-end delay than the original IEEE 802.11, although congestion is not fully addressed and the delay still increases dramatically with the increasing offered load.
Traditional end-to-end congestion control, i.e., TCP, has been shown to be inefficient in
ad hoc networks in many recent papers ([28, 29, 24, 46, 50, 51, 45, 140] and references
therein). Most of the current work to improve TCP performance, such as [23, 64, 103, 95,
40, 127, 46, 24, 139], focuses on the end-to-end congestion control mechanism of TCP,
with or without network layer feedback. The proposed schemes do not fully address the
impact of MAC layer performance and still suffer from severe MAC layer contention.
Gupta et al. [57] used the back-pressure concept to provide fair channel access to TCP
flows under heavy UDP traffic, with an implementation of a virtual, globally accessible
array that dynamically records the queue length of each flow at each node in the network.
Monks et al. [103] conducted simulations to illustrate the limitations of TCP-ELFN [64]
and discussed the pros and cons of end-to-end control and hop-by-hop control. They argue
that the advantages of hop-by-hop control may outweigh its drawbacks.
Hop-by-hop congestion control has been studied in wired networks, especially in ATM
networks ([89, 100]). However, these schemes cannot be directly applied to ad hoc
networks due to the completely different MAC and physical layers. To the best of our
knowledge, among recent studies only [146] comprehensively discusses hop-by-hop
congestion control for ad hoc networks. The authors formulated an optimization problem
and studied the end-to-end throughput under both hop-by-hop and end-to-end congestion
control. Their model only considered channel sharing among nodes traversed by the same
flows, and did not consider other medium contention among nodes within each other's
sensing or interference range. Compared to congestion control schemes for wired
networks, the CTSR packets in OPET can be regarded as a kind of credit, like those in the
credit-based flow control scheme [89], and NCTS packets can be regarded as a kind of
hop-by-hop source quench, although OPET does not require the cooperation of the
transport and application layers [111].
To the best of our knowledge, there have been no comprehensive studies that effectively
address the intra-flow contention and inter-flow contention problems in multihop mobile ad
hoc networks, which result in serious problems such as an "explosion" of control packets,
severe collisions of data packets, poor throughput and fairness, excessively long end-to-end
delay, congestion, and poor scalability. Prior works only improve one or two of these
performance metrics while sacrificing other metrics to some degree. By tackling these two
key problems with a novel cross-layer design, our scheme improves all of these metrics for
both UDP and TCP traffic, which is a significant departure from most recent works.
11.6 Conclusions
In this chapter, we first discuss the causes of the poor performance of the IEEE 802.11,
i.e., the intra-flow contention and inter-flow contention, in multihop ad hoc networks. In
order to reduce these two kinds of contention, we have proposed a framework of
distributed flow control and medium access, based on which a multihop packet scheduling
algorithm, OPET, is proposed for IEEE 802.11 multihop ad hoc networks. Extensive
simulations verify that our scheme OPET greatly reduces excessive collisions at the MAC
layer, quickly eliminates congestion, and achieves much better multihop packet scheduling
than the IEEE 802.11 MAC protocol. Thus it achieves stable and high throughput and
shorter end-to-end delay independent of traffic load, while the IEEE 802.11 MAC protocol
performs very poorly in terms of these two metrics for multihop flows. In addition,
compared to the IEEE 802.11 MAC protocol, OPET has better fairness, far fewer dropped
DATA packets, and more stable control overhead. Thus, OPET provides a very stable link
layer and is scalable to large networks with many multihop flows along long paths, without
incurring an explosion of control packets under heavy load.
CHAPTER 12
WCCP: IMPROVING TRANSPORT LAYER PERFORMANCE IN MULTIHOP AD
HOC NETWORKS BY EXPLOITING MAC LAYER INFORMATION
The traditional TCP congestion control mechanism encounters a number of new problems
and performs poorly when applied in multihop ad hoc networks. Many of these problems
result from medium contention at the MAC layer. In this chapter, we first illustrate that
severe medium contention and congestion are intimately coupled, and that the granularity
of the window based congestion control algorithm is too coarse, causing throughput
instability and excessively large delay. Further, we illustrate TCP's severe unfairness
problem due to medium contention, and the tradeoff between aggregate throughput and
fairness. Then, based on the novel use of the channel busyness ratio, which we show to be
an accurate indicator of network utilization and congestion status, a new wireless
congestion control protocol (WCCP) is proposed to efficiently and fairly support the
transport service in multihop ad hoc networks. In WCCP, each forwarding node along a
traffic flow exercises inter-node and intra-node fair resource allocation and determines the
network layer feedback accordingly. The end-to-end feedback, ultimately determined by
the bottleneck node along the flow, is carried back to the source to control its sending rate.
Extensive simulations show that WCCP significantly outperforms traditional TCP in terms
of channel utilization, delay, and fairness, and eliminates the starvation problem.
12.1 Introduction
Wireless ad hoc networks have found many applications in battlefields, disaster rescue,
and conventions, where fixed communication infrastructures are not available and quick
network configuration is needed. To provide reliable transport service over, and hence fully
exploit the potential of, ad hoc networks, efficient congestion control is of paramount
importance.
Unfortunately, the traditional TCP congestion control mechanism performs very poorly, as
shown in recent studies ([28, 29, 24, 46, 50, 51, 45, 140] and references therein). TCP
congestion control makes an implicit assumption that any packet loss is due to network
congestion. However, this assumption is no longer valid in ad hoc networks, as packet
losses may well be due to channel bit errors, medium contention, and route failures.
Several works have pointed out that greedy TCP can result in severe congestion in ad hoc
networks and hence performance degradation. Link-RED [46] was proposed to mark or
drop TCP packets according to observed packet collisions, so that the TCP source reduces
its congestion window size before it becomes excessively large. To avoid congestion, Chen
et al. [24] dynamically adjusted the congestion window limit according to the path length
of TCP flows. In [139], a neighborhood RED scheme was proposed to alleviate the TCP
fairness problem by adjusting the marking/dropping probability in light of observed
channel information.
Meanwhile, to alleviate the adverse impact of mobility, several schemes have been
proposed, such as those in [23, 64, 103, 95]. The design philosophy is to distinguish route
failures caused by topology changes from network congestion through explicit route failure
notifications. Other schemes, such as [40, 127], do not use network layer feedback; instead,
they keep the TCP state unchanged when the source first detects out-of-order packets or a
retransmission timeout.
In this chapter, we mainly focus on the problems arising from medium contention. In
Section 12.2, we show that a rate based congestion control protocol is more appropriate
than its window based counterpart in multihop ad hoc networks. We illustrate the close
coupling between congestion and medium contention, which explains the instability of
TCP. We then find that the optimal congestion window size of TCP may be less than one
packet even in a very simple topology, such as a chain topology, in order to maximize the
end-to-end throughput and minimize the end-to-end delay.
Nevertheless, it is not an easy task to conduct accurate end-to-end rate control in the ad
hoc environment, despite the fact that explicit and precise congestion feedback for
end-to-end control has been extensively studied in the Internet and ATM networks [83, 2].
This is because each node needs a robust and easily measured metric to adjust the
feedback for each passing packet. While packet loss, queue length, and link utilization are
good measures for wired networks, they cannot be directly applied to ad hoc networks for
two main reasons. First, the occurrence of packet loss or a large queue length may indicate
that severe congestion has already happened due to medium contention, leaving no time
for the network to react promptly. Second, unlike a wired link, which normally connects
two nodes, a wireless link is shared by all the neighboring nodes. Consequently, any
change in the status of the wireless link is much harder to trace, which in turn renders
accurate control extremely difficult. To overcome this difficulty, we propose a novel
measure of wireless link status: the channel busyness ratio which, as shown in Section
12.3.1, is a timely and accurate indicator of network utilization as well as congestion.
Then, in Section 12.3, we propose a new wireless congestion control protocol (WCCP)
based upon the channel busyness ratio. In this protocol, each forwarding node determines
the inter-node and intra-node fair channel resource allocation and allocates the resource to
passing flows by monitoring, and possibly overwriting, the feedback field of the data
packets according to its measured channel busyness ratio. The feedback is then carried
back to the source by the destination, which copies it from the data packet to the
corresponding acknowledgement. Finally, the source adjusts its sending rate accordingly.
Clearly, the sending rate of each flow is determined by the channel utilization status at the
bottleneck node. In this way, WCCP is able to approach max-min fairness ([11]) in certain
scenarios.
We compare WCCP with TCP through extensive simulations in Section 12.4. WCCP
significantly outperforms TCP in terms of channel utilization, delay, and fairness. In
particular, it solves the starvation problem suffered by TCP.
Figure 12–1: Chain topology with 9 nodes (numbered 1 to 9). Small circles denote the transmission range, and large circles denote the sensing range.
Finally, we note that WCCP is not meant to attack the problems caused by mobility. As a
result, WCCP is most useful in static multihop ad hoc networks. However, one can
combine WCCP with some of the schemes proposed in [23, 64, 103, 95, 40, 127] to
alleviate the performance degradation due to mobility. This will be further explored in our
future work. Conclusions are given in Section 12.5.
12.2 Medium Contention and Its Impact
12.2.1 TCP Performance Degradation Due to Coupling of Congestion and Medium Contention
To illustrate the coupling of congestion and medium contention, we use ns-2.27 ([106]) to
conduct a set of simulations over a 9-node chain topology, as shown in Fig. 12–1. One or
more TCP flows with 1000-byte payloads traverse from node 1 to node 9. A pre-computed
shortest path is used, so there is no routing overhead. The channel bandwidth is 2 Mbps.
Each simulation runs for 300 seconds.

We can see from Fig. 12–2(a) that TCP traffic introduces many collisions. Although there
is a retransmission mechanism for RTS and DATA frames at the MAC layer ([68]), many
TCP packets are still dropped, at a rate of 0.83 ∼ 3.63 pkt/s, because of medium
contention. Note that no packet loss due to queue overflow is observed.
Fig. 12–2(b) demonstrates that TCP traffic is unstable in the wireless multihop
environment. The round trip time (RTT) oscillates dramatically, and so does the
instantaneous throughput, which can be obtained by differentiating the number of delivered
packets (sequence number) with respect to time.

Figure 12–2: Simulation results for the 9-node chain topology: (a) collided RTS frames per second and packets dropped per second due to collisions, for different numbers of TCP flows; (b) flow 1's round trip time and sequence number over time when there are 5 flows.
All of these observations can be attributed to the greedy nature of TCP and the coupling of
congestion and contention. TCP continually increases the congestion window size until it
detects a packet loss. When the sending rate of the TCP sources surpasses the channel
capacity, packets start to accumulate along the path. When the neighboring nodes all have
packets to transmit, they keep contending for the channel. Consequently, more collisions
happen and the channel contention delay increases, slowing down the forwarding rate and
exacerbating congestion. Thus congestion and collision form a positive feedback loop
until some packets are dropped due to continual collisions and such losses are detected by
the TCP sources through retransmission timeouts or duplicate ACKs.
If dynamic routing schemes are used in multihop ad hoc networks, the situation becomes
worse. Since the MAC layer cannot distinguish whether losses are due to collisions or
unreachable next hops, it reports false link/route failures when packets are dropped due to
collisions. The routing layer then launches a time-consuming route search or re-routing,
thereby increasing the end-to-end delay.
12.2.2 Optimal Congestion Window Size for TCP and Ideal Sending Rate
In this subsection we show how the nature of medium contention in multihop ad hoc
networks dictates the optimal sending rate per RTT and hence invalidates the window
based congestion control mechanism of TCP.

Li et al. [91] have shown that, in a chain topology like that in Fig. 12–1, the maximum
channel utilization of a chain of ad hoc nodes is 1/4, achieved by scheduling nodes four
hops apart to transmit simultaneously. Thus the optimal sending rate R_o from the source
cannot be higher than the rate that makes this schedule feasible. A higher sending rate
results in packet collisions and losses, and hence low throughput and long delay. At rate
R_o, a packet is delivered to the destination in the shortest time without encountering
much medium collision or long queueing delay, and the RTT, denoted by RTT_o, is small.
Assuming there are N TCP flows from node 1 to node 9, the optimal sending rate of each
TCP flow is

    R_o,eachtcp = R_o / N,                      (12.1)
    K_eachtcp = RTT_o × R_o / N,                (12.2)

where K_eachtcp is the number of packets sent by each TCP source per RTT_o.
According to the above optimal schedule, we can find the optimal aggregate sending rate.
Here again, we use simulation to illustrate R_o when the 802.11 MAC is used. CBR/UDP
traffic with the same packet length as TCP DATA packets flows from node 1 to node 9.
CBR/UDP traffic with the same packet length as TCP ACK packets flows in the reverse
direction. The CBR traffic in the two directions has the same packet sending rate (pkt/s).
We gradually increase the sending rate until the DATA packet dropping rate due to
collisions exceeds 0.1 pkt/s in the 300-second simulation. Notice that further increasing the
sending rate causes a dramatic increase in collisions and the packet dropping rate. The
results are summarized in Table 12–1, where the performance of five TCP flows is also
included for comparison.
Table 12–1: Simulation results for TCP and UDP flows

    Traffic type                           UDP (node 1 to 9)   5 TCP flows
    Aggregate throughput (Kbps)            198                 196
    Average end-to-end delay (s)           0.0695              0.431
    RTT (s)                                0.139*              0.738
    Dropped packets/s due to collision     0.0931              2.90

    * the sum of the average end-to-end delays of the two UDP flows
The corresponding aggregate sending rate is about 24.3 pkt/s, given that the DATA packet
dropping rate due to collisions is less than 0.1 pkt/s. Then RTT_o = 0.139 s and
K_eachtcp = 0.676 pkt/RTT when N = 5. Clearly, the more TCP flows there are, the
smaller K_eachtcp is. Since the optimal sending rate per RTT is less than one packet,
window based congestion control protocols such as TCP tend to overshoot the network
capacity, as the minimum increase in window size is one packet. In other words, the
granularity of the window based congestion control mechanism is too coarse. In this sense,
window based protocols are not appropriate for supporting stable and reliable transport
service in multihop ad hoc networks. Therefore, to provide high throughput, short delay,
and stable performance with few packet collisions, we opt for an efficient rate-based
congestion control algorithm, detailed in Section 12.3.
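The arithmetic behind K_eachtcp can be checked directly from the measured numbers in Table 12–1 (a trivial sketch; the variable names are ours):

```python
# Check Eq. (12.2) against the measured values: RTT_o = 0.139 s,
# aggregate optimal sending rate R_o = 24.3 pkt/s, N = 5 TCP flows.
RTT_o = 0.139   # seconds (UDP RTT from Table 12-1)
R_o = 24.3      # pkt/s
N = 5

K_eachtcp = RTT_o * R_o / N   # optimal packets per RTT per TCP flow
print(round(K_eachtcp, 3))    # -> 0.676, well below one packet per RTT
```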
12.2.3 Unfairness Problem Due to Medium Contention
Medium contention is also a major source of unfairness, in two respects. First, different
flows may traverse different geographical regions. They may encounter different levels of
contention due to the varying number of contending nodes in each region, and thus obtain
different allocations of the shared wireless channel. Second, an unfairness problem exists
for flows with different path lengths, since a flow with a longer path consumes more
channel resource, is likely to encounter more medium contention, and drops more packets.
Starvation is a severe unfairness problem suffered by TCP flows in multihop ad hoc
networks, and it can be attributed to medium contention. The hidden terminal and receiver
blocking problems ([165, 161, 153]) are common in multihop ad hoc networks. Together
with the greediness of TCP flows, they contribute to flow starvation as well as packet
collisions. For example, as shown in Fig. 12–1, suppose that two TCP flows pass through
the link from node 1 to node 2 and the link from node 4 to node 5, respectively. When
node 4 is transmitting packets to node 5, node 2 is a blocked receiver of node 1, since
node 2 senses the busy channel and cannot respond to node 1. As a result, node 1 keeps
doubling its contention window and retransmitting the RTS packet until it drops the
packet. After node 4 finishes its transmission, it resets its contention window and hence
has a higher priority than node 1 in grabbing the channel. The hidden terminal problem
makes node 1's situation worse. Suppose node 1 successfully contends for the channel
with a successful RTS/CTS handshake and begins to transmit the DATA packet. During
the long period of the DATA transmission, node 4 may initiate a new transmission to node
5 since it senses the
channel is idle. This transmission will collide with node 1's transmission at node 2. Thus
the flow passing through the link from node 1 to node 2 will be starved if there is a greedy
flow passing through the link from node 4 to node 5.

Figure 12–3: Nine-node chain topology with different traffic distributions: (a) Scenario 1, (b) Scenario 2, (c) Scenario 3.
It is important to note that there is a tradeoff between fairness and aggregate throughput.
It is known that spatial reuse of the channel bandwidth can be achieved by scheduling
simultaneous transmissions whose regions do not conflict. However, as noted above,
different flows may experience different degrees of contention. Achieving fairness among
those flows requires allocating the channel to flows with heavy contention for a larger time
share, which correspondingly reduces channel reuse and hence the aggregate throughput.
In addition, while the maximum throughput of a one-hop flow is the full channel
bandwidth, it reduces to one half, one third, and one fourth of the channel bandwidth for a
two-, three-, and four-hop flow, respectively. Therefore, fair throughput allocation for flows
with different path lengths in the same region has to be achieved at the expense of the
aggregate throughput.
Fig. 12–3 shows a few more examples of unfairness. In Fig. 12–3(b), flow 2 traverses
more hops and suffers more medium contention. Consequently, it drops more packets and
suffers more serious throughput degradation than flows 1 and 3. In Fig. 12–3(c), flow 6
suffers no hidden terminal or receiver blocking problems while flow 1 does, and flow 1
could be starved. Assigning channel resource to flow 1 results in a decrease of the
aggregate throughput because the channel resource is shared. With perfect scheduling, the
throughputs for the six flows are (1/12, 1/12, 1/12, 1/12, 1/12, 1/3) of the channel
bandwidth under max-min fair allocation, and (0, 1/8, 1/8, 1/8, 1/8, 1/2) when maximizing
the aggregate throughput while maintaining fairness among flows 2, 3, 4, and 5. The
aggregate throughputs in these two cases are 3/4 and 1 of the channel bandwidth,
respectively. Clearly, this demonstrates the tradeoff between the aggregate throughput and
fairness. We will give simulation results for these scenarios in Section 12.4 and show that
WCCP approaches max-min fairness in certain scenarios.
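The two aggregate throughput figures follow directly from the per-flow shares quoted above; a quick check with exact fractions (our own sketch, with the shares taken from the text):

```python
from fractions import Fraction as F

# Per-flow shares of the channel bandwidth for the six flows, as in the text.
maxmin  = [F(1, 12)] * 5 + [F(1, 3)]           # max-min fair allocation
maxtput = [F(0)] + [F(1, 8)] * 4 + [F(1, 2)]   # maximize aggregate throughput

print(sum(maxmin))    # -> 3/4 of the channel bandwidth
print(sum(maxtput))   # -> 1, the full channel bandwidth
```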
12.3 Wireless Congestion Control Protocol (WCCP)
As mentioned in the previous section, TCP's window based congestion control suffers
from coarse granularity when applied to the multihop ad hoc environment. To overcome
this problem, we propose a rate based wireless congestion control protocol (WCCP) in this
section. First, we discuss how to characterize the channel status and measure the available
bandwidth of the shared channel. Based on the estimate of the available bandwidth,
inter-node and intra-node resource allocation schemes are proposed to determine the
available channel resource for each node and for each flow passing through that node, and
to modify the network layer feedback accordingly. Then an end-to-end rate control scheme
is proposed to carry the feedback from the bottleneck node to the source node, which
adjusts its sending rate accordingly to make full and fair use of the channel resource at the
bottleneck node without causing severe medium contention and packet collisions.
12.3.1 Channel Busyness Ratio: Sign of Congestion and Available Bandwidth
In a rate-based congestion control algorithm, to calculate the ideal sending rate, the source
needs a timely and easily measured metric that satisfies two requirements. First, as
mentioned in the previous discussion, since MAC contention is tightly coupled with
congestion, a candidate congestion indicator should reflect the condition of MAC
contention and collisions. Second, in order to fully utilize the shared channel without
causing severe congestion and packet collisions, the candidate should indicate the available
bandwidth.

Figure 12–4: The relationship between the channel busyness ratio and channel utilization, normalized throughput, and collision probability p: (a) with different numbers of nodes (n = 5, 10, 50); (b) with different payload sizes (64, 256, 1000, and 1500 bytes).
In our previous work [150], we have shown that the channel busyness ratio r_b meets
these two requirements; the main results are shown in Fig. 12–4. The channel utilization
c_u indicates the ratio of the channel occupation time of successful transmissions to the
total time; the normalized throughput s indicates the achievable data rate of the payload
divided by the channel data rate, and is proportional to c_u; and the collision probability p
indicates the average probability that a transmission encounters a collision.

Several important results can be observed from Fig. 12–4(a). First, before the channel
utilization c_u reaches its peak, r_b is almost the same as c_u and hence can be used to
represent the normalized throughput. Second, after r_b exceeds the threshold at which c_u
reaches its peak, a small increase in r_b causes p to increase very fast until the saturated
status is reached. This case is certainly undesirable, since p, the queue size, and the queue
waiting time all become unacceptably large, as indicated in [160, 150]. Finally, and most
importantly, the above observations are almost independent of the total number of nodes in
the neighborhood. This is a very nice feature, since changes in the number of neighbors
will not affect a node's perception of channel utilization or network congestion, as long as
it relies on the observed channel busyness ratio.
Now that the channel busyness ratio r_b is a good early indicator of network congestion,
we can feed the observed r_b to the end-to-end control mechanism to control TCP sources
and hence avoid overloading the network. To do so, the key is to choose the threshold,
denoted by th_b, for r_b to indicate the inception of congestion. Obviously, th_b should
be chosen such that

    r_b ≈ c_u   (r_b ≤ th_b).                   (12.3)

Since the behavior of r_b is not sensitive to n, we can fix n and observe the effect of the
payload size. Fig. 12–4(b) shows c_u, s, and p as functions of r_b for different average
payload sizes of DATA packets when n = 10. It can be observed that the smaller the
average payload size, the smaller th_b should be; it usually falls in the range 90% ∼ 95%.
Since payload sizes of 1000 ∼ 1500 bytes are commonly used in ad hoc networks, we set
th_b to 92% accordingly, leaving a 3% margin to avoid entering saturation.
After choosing th_b, according to Equation (12.3) we can estimate the available bandwidth
of each node, denoted by BW_a, as follows:

    BW_a = BW × (th_b − r_b) × data/T_s,   if th_b > r_b,
    BW_a = 0,                              if th_b ≤ r_b,       (12.4)

where BW is the transmission rate in bits/s for DATA packets, data is the average payload
size in units of channel occupation time, and T_s is the average time of a successful
transmission at the MAC layer. Therefore, as long as the channel busyness ratio does not
exceed the threshold, the network works in non-saturated status, and the available
bandwidth can be used to accommodate more traffic without causing severe MAC
contention. Note that the available bandwidth is shared by all the nodes in the
neighborhood, including the observing node.
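As a concrete sketch, Equation (12.4) amounts to a simple clamp. The function below is our own illustration; the default values for `data_time` and `t_s` are made-up example numbers, not values from the dissertation:

```python
def available_bandwidth(bw_bits, r_b, th_b=0.92, data_time=0.0044, t_s=0.0062):
    """Estimate the available bandwidth BW_a per Eq. (12.4).

    bw_bits   -- channel transmission rate BW in bits/s
    r_b       -- measured channel busyness ratio
    th_b      -- busyness threshold (92% in the text)
    data_time -- average payload size as channel occupation time (s, assumed)
    t_s       -- average duration of one successful MAC transmission (s, assumed)
    """
    if th_b <= r_b:        # at or beyond the threshold: no spare bandwidth
        return 0.0
    return bw_bits * (th_b - r_b) * data_time / t_s

# Example: a 2 Mbps channel that is currently 60% busy still has headroom.
print(available_bandwidth(2e6, 0.60))
```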
12.3.2 Measurement of Channel Busyness Ratio in Multihop Ad Hoc Networks
The channel busyness ratio r_b is easily measured at each node under the current
architecture of the IEEE 802.11 standard. Notice that IEEE 802.11 is a CSMA-based MAC
protocol, working with physical and virtual carrier sensing mechanisms. There is already a
function to determine whether the channel is busy: the channel is determined busy when
the measuring node is transmitting, is receiving, or its network allocation vector (NAV)
([68]) indicates the channel is busy, and is idle otherwise.

In a multihop ad hoc network, to overcome the impact of the hidden terminal and receiver
blocking problems ([165, 161, 153]) on the estimate of available bandwidth, we adopt a
slightly different procedure from our previous work on wireless LANs ([150, 149]) to
determine the channel busyness ratio. Specifically, the channel is also determined busy
when the MAC layer has a packet in the backoff procedure due to receiver blocking. For
example, suppose node A's intended receiver B is blocked by some ongoing transmissions
that cannot be sensed by A, i.e., the channel resource around B is used but that around A
is idle. Without receiving a response from B, A doubles its backoff window and keeps
silent for a longer time, during which A senses the channel idle but cannot accommodate
more traffic since B is blocked.
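A minimal event-driven sketch of this measurement (our own illustration; the class and method names are assumptions, not part of the 802.11 standard or of WCCP):

```python
class BusynessMeter:
    """Track the channel busyness ratio r_b over a control interval.

    The channel counts as busy while the node is transmitting, receiving,
    its NAV is set, or -- per the modification described above -- a packet
    sits in backoff because the intended receiver is blocked.
    """
    def __init__(self):
        self.busy_time = 0.0
        self._busy_since = None    # timestamp when the current busy period began

    def channel_busy(self, now):
        if self._busy_since is None:
            self._busy_since = now

    def channel_idle(self, now):
        if self._busy_since is not None:
            self.busy_time += now - self._busy_since
            self._busy_since = None

    def ratio(self, interval_start, now):
        busy = self.busy_time
        if self._busy_since is not None:   # still busy at sampling time
            busy += now - self._busy_since
        total = now - interval_start
        return busy / total if total > 0 else 0.0

m = BusynessMeter()
m.channel_busy(0.0); m.channel_idle(0.25)   # busy 0.00-0.25 s
m.channel_busy(0.5); m.channel_idle(0.75)   # busy 0.50-0.75 s
print(m.ratio(0.0, 1.0))                    # -> 0.5 of the 1 s interval was busy
```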
12.3.3 Inter-node Resource Allocation
According to Equation (12.4), each node can calculate the total available bandwidth in its
neighborhood based on the channel busyness ratio measured over a period called the
average control interval, denoted by ci. Details on determining ci are given in Section
12.3.5.

To determine the available bandwidth for each node, WCCP allocates channel resource
∆S to each node in proportion to its current traffic load S in ci. Noticing from Equation
(12.4) that BW_a is linear in (th_b − r_b), we have

    ∆S = ((th_b − r_b) / r_b) × S.              (12.5)

Because both the input traffic and output traffic of each node consume the shared channel
resource, S should include the total traffic (in bytes), i.e., the sum of the total input and
output traffic. In Fig. 12–3(b), for example, there are three flows at node 5, and the total
traffic is S = r1 + r3 + 2 × r2, where ri (1 ≤ i ≤ 3) is the traffic of flow i.
Equation (12.5) seems straightforward; however, to better understand it, we need to
elaborate on it. There are two cases when we compare the observed r_b with th_b, i.e.,
r_b < th_b and r_b > th_b. When r_b < th_b, ∆S is positive, meaning we increase the
traffic. As shown in Fig. 12–4, in this case the collision probability is very small and all
the traffic gets through, so the total throughput is approximately equal to the total traffic
rate. Since the available bandwidth is proportional to th_b − r_b according to Equation
(12.4), we may increase S by ∆S such that, after the increase, S is proportional to th_b,
the optimal channel utilization. Equation (12.5) achieves exactly this increase, as it can
easily be seen that

    (∆S + S) / th_b = S / r_b.                  (12.6)

Therefore, r_b will approach th_b after one average control interval ci when all the nodes
in the neighborhood increase their total traffic rates according to Equation (12.5).

When r_b > th_b, ∆S is negative, meaning we decrease the traffic. In this case, however,
the linear relationship between the available bandwidth and r_b no longer exists, and the
collision probability increases dramatically as the total traffic rate increases. In addition,
when the node enters saturation, both the collision probability and r_b reach their
maximum values and do not change as the traffic increases, although the total throughput
decreases. It thus appears that, ideally, WCCP should aggressively decrease the total
traffic rate. However, since it is difficult to derive a simple relationship between the traffic
rate and r_b when r_b > th_b, WCCP uses the same linear function as for the case
r_b < th_b. This does not affect the performance of WCCP significantly as long as the
increase in the total traffic rate is kept appropriate, as suggested by th_b, and the choice of
th_b is a little conservative, as discussed in the previous section. Indeed, this brings two
advantages. First, as the increase and decrease follow the same law, it is simple to
implement at each node. Second, opting out of aggressive decrease helps achieve smaller
oscillations in channel utilization.
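Equations (12.5) and (12.6) amount to a one-step proportional controller: one update drives r_b to th_b if throughput scales linearly with offered traffic. A small sketch (our own variable names and made-up numbers):

```python
TH_B = 0.92   # busyness threshold th_b from the text

def delta_s(r_b, s):
    """Aggregate feedback per Eq. (12.5): change in total traffic for one node."""
    return (TH_B - r_b) / r_b * s

# Under-utilized neighborhood: r_b = 0.46, current load S = 100 KB per interval.
s = 100.0
d = delta_s(0.46, s)
print(round(d))                # -> 100, i.e. double the traffic

# Eq. (12.6): after the update, (S + dS)/th_b == S/r_b, so the busyness ratio
# scales up to th_b (assuming busyness grows linearly with offered traffic).
new_rb = 0.46 * (s + d) / s
print(round(new_rb, 2))        # -> 0.92, i.e. th_b
```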
12.3.4 Intra-node Resource Allocation
After calculating ∆S, the change in the total traffic, i.e., the aggregate feedback at each
node, WCCP needs to apportion it to the individual flows traversing that node in order to
achieve both efficiency and fairness.

WCCP relies on an Additive-Increase Multiplicative-Decrease (AIMD) policy to converge
to efficiency and fairness: if ∆S > 0, all flows increase their throughput by the same
amount; if ∆S < 0, each flow decreases its throughput in proportion to its current
throughput.
Before determining the positive feedback when ∆S ≥ 0, WCCP needs to estimate
the number of flows passing through the considered node. Again, since the channel is
shared by both input and output traffic, the number of flows I used by WCCP differs
from the real number of flows. For flows that either originate or terminate at
the node, the node counts each as one flow, whereas for flows that merely pass through the node,
it counts each as two flows, i.e., one in and one out. For instance, in Fig. 12–3(b),
I = 4 for nodes 2 to 8, while I = 2 for nodes 1 and 9. Let rpk denote the packet sending rate
(pkt/s) of the flow to which the kth packet observed during the period ci at node i belongs.
I can be calculated as

    I = \sum_{k=1}^{K} \frac{factor_k}{r_{pk} \, c_i}    (12.7)

where K is the total number of different packets seen by node i in ci, and factork equals
2 for packets that arrive at node i and are then forwarded, and 1 otherwise. For
instance, in Fig. 12–3(b), for node 5, factork = 2 for packets belonging to flow 2 and
factork = 1 for packets belonging to flows 1 and 3. If each packet piggybacks the source
sending rate rpk, the node only needs to do the summation for each received and transmitted
packet, i.e.,

    I = \sum_{k=1}^{K'} \frac{1}{r_{pk} \, c_i},    (12.8)

where K' is the total number of received and transmitted packets in ci.
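The flow-count estimate of Eqs. 12.7 and 12.8 can be sketched in a few lines. The function below is illustrative (the names `packets` and `forwarded` are assumptions, not from the dissertation) and assumes each observed packet carries its flow's piggybacked source rate:

```python
def estimate_flow_count(packets, ci):
    """Estimate the effective number of flows I at a node over one
    control interval ci (Eqs. 12.7/12.8).

    `packets` is a list of (r_pk, forwarded) pairs: r_pk is the source
    sending rate (pkt/s) carried in the packet, and `forwarded` is True
    if the packet arrived at this node and was then forwarded on
    (such a flow counts twice: once in, once out)."""
    total = 0.0
    for r_pk, forwarded in packets:
        factor = 2 if forwarded else 1   # factor_k in Eq. 12.7
        total += factor / (r_pk * ci)
    return total
```

Since a flow sending at rate r contributes about r × ci packets in the interval, each flow's terms sum to its factor, which is why a relayed flow counts as 2 and a locally terminated one as 1.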
Thus, if ∆S ≥ 0, the increase in traffic rate for each flow, Cp, and the per-packet
feedback pfk are

    C_p = \frac{\Delta S}{c_i I}    (12.9)

    pf_k = \frac{C_p}{r_{pk} \, c_i}.    (12.10)
If ∆S < 0, WCCP should decrease each flow's throughput proportionally to its current
throughput. Notice also that the per-packet feedback is inversely proportional to the
expected number of packets seen by the node in ci. Thus

    pf_k = C_n \frac{r_{pk}}{r_{pk} \, c_i} = \frac{C_n}{c_i},    (12.11)

where Cn is a constant satisfying

    \sum_{k=1}^{K} factor_k \, pf_k = \sum_{k=1}^{K'} pf_k = \frac{\Delta S}{c_i},    (12.12)

    C_n = \frac{\Delta S}{\sum_{k=1}^{K} factor_k}.    (12.13)
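Equations 12.9 through 12.13 amount to a small per-interval computation. The sketch below (illustrative names, not the dissertation's code) returns one feedback value per observed packet and preserves the invariant of Eq. 12.12: the factor-weighted feedback sums to ∆S/ci in both cases:

```python
def per_packet_feedback(delta_S, ci, packets):
    """Compute per-packet feedback pf_k (Eqs. 12.9-12.13).

    `packets` is a list of (r_pk, factor_k) pairs for the packets seen
    in the last control interval ci; returns one pf_k per packet."""
    if delta_S >= 0:
        # additive increase: every flow gets the same rate increment Cp
        I = sum(f / (r * ci) for r, f in packets)      # Eq. 12.7
        Cp = delta_S / (ci * I)                        # Eq. 12.9
        return [Cp / (r * ci) for r, f in packets]     # Eq. 12.10
    else:
        # multiplicative decrease: proportional to current throughput,
        # which yields a constant per-packet feedback Cn/ci (Eq. 12.11)
        Cn = delta_S / sum(f for r, f in packets)      # Eq. 12.13
        return [Cn / ci for r, f in packets]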
WCCP aims to make full use of the channel resource without introducing severe
medium contention; i.e., rb should be as close to thb as possible but should never exceed thb
by much. Therefore, when the aggregate feedback of previously passing packets equals
∆S, the node sets the local feedback value to zero until the next control interval starts.
With this mechanism in place, the channel busyness ratio rb should stay around thb at the
bottleneck nodes and be smaller at other nodes.
However, it is hard to converge to a fair resource allocation, since the adjustment by
the multiplicative-decrease law is limited if rb is well controlled and always close to thb.
Thus WCCP manages to transfer resource from high-throughput flows to low-throughput
flows by employing both the increase law and the decrease law when |∆S| < α(∆S + S). Let
∆S+ and ∆S− denote the increased and decreased traffic amounts, respectively. Also, let
pf+k and pf−k denote the positive and negative feedback calculated by the increase law with
∆S+ and the decrease law with ∆S−, respectively. Specifically,

    \Delta S = \Delta S^{+} + \Delta S^{-}, \quad pf_k = pf_k^{+} + pf_k^{-}
    \text{if } 0 < \Delta S < \alpha(S + \Delta S), \; \Delta S^{+} = \alpha(S + \Delta S)
    \text{if } 0 > \Delta S > -\alpha(S + \Delta S), \; \Delta S^{-} = -\alpha(S + \Delta S)    (12.14)

where the adjustment by the decrease law is about α(S + ∆S) in each control interval when
rb is around thb. We set α = 10% as a tradeoff between the convergence speed of fairness
and that of throughput.
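Under the stated rule that the decrease-law budget is α(S + ∆S) per control interval, the split of Eq. 12.14 can be sketched as follows (function and variable names are illustrative assumptions):

```python
def split_feedback(delta_S, S, alpha=0.10):
    """Split the aggregate feedback into an increase part dS_plus and a
    decrease part dS_minus with dS_plus + dS_minus == delta_S (Eq. 12.14),
    so resource is still shuffled between flows when |delta_S| is small."""
    budget = alpha * (S + delta_S)
    if 0 < delta_S < budget:
        dS_plus = budget                 # increase law gets the full budget
        dS_minus = delta_S - dS_plus     # decrease law removes the excess
    elif -budget < delta_S < 0:
        dS_minus = -budget               # decrease law removes the full budget
        dS_plus = delta_S - dS_minus     # increase law gives some back
    else:
        # |delta_S| large: plain AIMD, no extra transfer
        dS_plus, dS_minus = max(delta_S, 0.0), min(delta_S, 0.0)
    return dS_plus, dS_minus
```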
12.3.5 End-to-End Rate-Based Congestion Control Scheme
The rate control mechanism of WCCP is illustrated in Fig. 12–5. A leaky bucket (permit
queue) is attached to the transport layer to control the sending rate of a WCCP sender.
The permit arriving rate rp of the leaky bucket is dynamically adjusted according to the explicit
feedback fb carried in the returned ACK whenever a new ACK arrives (henceforth,
ACKs refer to the transport-layer acknowledgments). Namely,

    r_p = r_p + fb,    (12.15)

where the setting of fb is given below.

To enable this feedback mechanism, each WCCP packet carries a congestion header
including three fields, rp, ci, and fb, which is used to communicate a flow's state to
the intermediate nodes and the feedback from the intermediate nodes back to the source. The
field rp is the sender's current permit arriving rate, and the field ci is the sender's currently
used control interval. They are filled in by the sender and never modified in transit. The
last field, fb, is initialized by the sender, and all the intermediate nodes along the path may
modify it to directly control the packet sending rate of the source.
300
[Figure: permits arrive at a rate of one per 1/rp seconds into a permit queue whose maximum size is the receiver's advertised window; packets wait in an arriving queue until a permit is available. TCP's congestion window spans Slast (the oldest unacknowledged packet) to Slast + WS − 1 (the highest-numbered packet that can be transmitted), where WS is the receiver's advertised window.]
Figure 12–5: Rate control mechanism
The WCCP sender maintains an estimate of the smoothed round trip time srtt and
calculates the control interval ci as

    c_i = \max(srtt, \; 5/r_p).    (12.16)

When rp is large, i.e., rp > 5/srtt, ci = srtt; otherwise, this period equals 5/rp. The
value of the control interval thus ensures that, on average, at least 5 data packets
are transmitted in this period. If the period is too long, the adjustment of the sending
rate is sluggish in responding to load changes along the path. If the period is too short,
the estimation of the feedback over short intervals at the nodes along the path leads
to erroneous estimates, and sometimes no feedback may be received in one control
interval. The choice of 5 is a tradeoff between these two considerations.

Initially, when the WCCP sender sends out the first packet of a flow, rp = 0 and
ci = 0, indicating to the intermediate nodes that the sender does not yet have a valid
estimate of the smoothed round trip time srtt. The sender also initializes the fb field
such that, if bandwidth is available, this initialization allows the sender to reach the desired
rate after one ci. When the first ACK returns, the sender sets rp = 1/rtt, calculates ci, and
sends out the second data packet. Thereafter, a WCCP sender sends out a data packet only
when the transmission window allows and a permit is available.
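The sender-side bookkeeping of Eqs. 12.15 and 12.16 can be sketched as below. The RTT smoothing constant (0.875/0.125) is an assumption borrowed from TCP practice, not specified in the text:

```python
class WCCPSender:
    """Minimal sketch of the WCCP sender's permit-rate bookkeeping
    (Eqs. 12.15-12.16); not the full protocol."""

    def __init__(self):
        self.rp = 0.0      # permit arriving rate (permits/s)
        self.srtt = None   # smoothed round trip time estimate

    def on_ack(self, fb, rtt_sample):
        """Process a returned ACK carrying feedback fb; return new ci."""
        if self.srtt is None:
            # first ACK: bootstrap rp from the measured RTT
            self.srtt = rtt_sample
            self.rp = 1.0 / rtt_sample
        else:
            # TCP-style EWMA smoothing (an assumed constant)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt_sample
            self.rp = self.rp + fb                     # Eq. 12.15
        return self.control_interval()

    def control_interval(self):
        # Eq. 12.16: at least ~5 data packets per control interval
        return max(self.srtt, 5.0 / self.rp)
```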
All the nodes along the flow's path, including the WCCP sender and receiver, keep
monitoring the channel busyness ratio rb, maintain a per-node estimation-control timer
that is set to the most recent estimate of the average control interval ci, and calculate the local
per-packet feedback pfk according to the rules specified in Sections 12.3.3 and 12.3.4. ci is updated
at the end of each ci by the following equation:

    ci_{new} = \frac{\sum_k ci_k \cdot (ci \cdot r_{pk}) / r_{pk}}{\sum_k (ci \cdot r_{pk}) / r_{pk}} = \frac{\sum_j ci_j / r_{pj}}{\sum_j 1 / r_{pj}},    (12.17)

where j is the index of each packet observed in ci, k is the index of each flow, ci × rpk is
the estimated total number of transmitted packets of flow k in ci, and ci_new is the new estimated
value of ci. If pfk < fb, the node sets the fb field in the congestion header to the value
of pfk. Ultimately, the packet will contain the feedback from the bottleneck node along
the path. When the feedback reaches the WCCP receiver, it is returned to the sender in an
ACK packet. Notice that a WCCP receiver is similar to a TCP receiver except that, when
acknowledging a packet, it copies the congestion header from the data packet to its ACK.
As for the overhead, note that WCCP does not require each node to keep per-flow state
information and can scale well to any number of flows. Moreover, the feedback calculation
at each intermediate node is quite simple, only requiring a few CPU cycles for each packet.
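The per-node update of Eq. 12.17 is a weighted average in which each observed packet is weighted by the inverse of its flow's rate, so every flow contributes equally regardless of how many packets it sends. A sketch with illustrative names:

```python
def update_control_interval(observed):
    """New per-node estimate of the average control interval (Eq. 12.17).

    `observed` is a list of (ci_j, r_pj) pairs, one per packet seen in
    the last interval: ci_j is the control interval and r_pj the source
    rate piggybacked in packet j."""
    num = sum(ci_j / r_pj for ci_j, r_pj in observed)
    den = sum(1.0 / r_pj for ci_j, r_pj in observed)
    return num / den
```

For example, a fast flow with ci = 0.1 s and a slow flow with ci = 0.3 s average to 0.2 s even though the fast flow contributes twice as many packets.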
The retransmission timer RTO expires when there is a packet loss. Note that in ad
hoc networks, queue overflow rarely happens for TCP flows; packet losses mainly
result from failed transmission attempts at the MAC layer due to contention, collision,
wireless channel error, or mobility-caused route failures. Subsequently, the link breakage
is reported to the routing protocol, which may further drop subsequent packets. Notice that
in this case the original route is broken, so the timeout signals not only the packet loss
but also the route breakage. To avoid long pauses, and hence waste of channel
capacity, it is wise for WCCP to send out a probe message, or simply retransmit the lost packet
at periodic intervals, to detect whether a new route has been established.
Therefore, WCCP responds to timeouts as follows. On the first timeout,
the WCCP sender retransmits the corresponding packet, doubles the retransmission timer,
and resets rp to 1/RTO, where RTO denotes the retransmission timeout. Note that retransmitted
packets have higher priority than normal packets; in other words, a retransmitted packet
is transmitted when the next permit arrives, no matter whether there are any other
packets in the window. For subsequent back-to-back timeouts before a new acknowledgement
arrives, WCCP neither doubles its retransmission timer again nor
resets rp. It also records the time when the retransmission timer expires, in order to differentially
treat the feedback information carried by the ACKs that arrive after the timeout and route
repair. The feedback in ACKs that acknowledge packets sent prior to the
timeout is simply ignored, since it is very likely that the feedback was calculated before the
route failure and has hence become outdated. By contrast, the feedback in ACKs that
acknowledge packets sent after the timeout is used to adjust the permit
arriving rate as normal.
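The timeout rules above reduce to a small state machine. This sketch uses illustrative field names and is not the dissertation's implementation:

```python
def on_timeout(state, now):
    """React to a retransmission timeout (a sketch).  Only the first of a
    run of back-to-back timeouts doubles RTO and resets the permit rate."""
    if state['first_timeout']:
        state['rto'] *= 2.0
        state['rp'] = 1.0 / state['rto']   # reset permit rate to 1/RTO
        state['first_timeout'] = False
    state['timeout_time'] = now            # remembered for staleness checks
    return state

def feedback_is_stale(packet_send_time, timeout_time):
    """Feedback in ACKs for packets sent before the timeout is ignored,
    since it was likely computed before the route failure."""
    return packet_send_time < timeout_time
```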
12.4 Performance Evaluation
In this section, we demonstrate through extensive simulations that WCCP outperforms
TCP in multihop ad hoc networks. In contrast to TCP, the new protocol dampens the
oscillations of channel utilization and quickly converges to high utilization, short round trip
time, small queue size, and fair bandwidth allocation.
We use the network simulator ns-2 (version 2.27) to conduct the simulations. The transmission range
is about 250 m and the sensing range is about 550 m. We set the channel bandwidth to
2 Mbps and use 1000 bytes as the payload size of each DATA packet.

In the simulations, we first consider the chain topology in Fig. 12–3, where nodes are
separated by 200 m, which is simple and allows us to clearly demonstrate the advantages
of WCCP over TCP. Then, we consider a random network topology with a large number
of flows, in an effort to model a more realistic network environment. Pre-computed
shortest paths are used unless otherwise indicated.
12.4.1 Chain Topology
Channel utilization and packet collision: We first consider the scenario in Fig. 12–3(a).
Table 12–2 shows that WCCP improves the throughput by about 8% while dropping only
about 14% as many packets as TCP.

Table 12–2: Performance of WCCP and TCP in the chain topology of Fig. 12–3(a)

                                   TCP       WCCP
    Throughput (Kb/s)              194.6     209.9
    Average end-to-end delay (s)   0.1844    0.0757
    Dropping (pkt/s)               0.834     0.120
In Fig. 12–6(a), the channel busyness ratio is presented. Each point in the curves is an
average value over one second. It can be observed that WCCP converges to high link
utilization and stabilizes within a narrow range, while TCP oscillates frequently over a large range.
In fact, the stable and high channel utilization is what yields the throughput improvement.
We also observe that different nodes see different channel busyness ratios. Since node 5 is
in the middle of the chain and thus encounters the heaviest collisions, its channel busyness
ratio is the largest. On the other hand, compared to node 1, node 9, as a destination, does
not transmit any DATA packets, so it observes the smallest channel busyness ratio.

Fig. 12–6(b) demonstrates that WCCP has a much smaller round trip time, rtt, than
TCP: for WCCP, the average rtt is 0.1228 s, as opposed to 0.2646 s for TCP. Fig.
12–6(c) shows that WCCP maintains a much smaller queue size at all nodes than TCP.
In addition, as pointed out earlier, a large queue size keeps a node busy contending for the
channel, which increases contention and causes packets to be dropped. Thus, a small queue
size is desirable. This also explains why TCP has a much larger packet dropping rate (in
pkt/s) than WCCP.
[Figure panels: (a) channel busyness ratio and (b) round trip time (s) over 300 s for WCCP and TCP at nodes 1, 5, and 9; (c) average queue length versus node ID (1-9).]
Figure 12–6: Simulation results for the nine-node chain topology with one flow
[Figure: aggregate throughput (Kbps) and average/per-flow end-to-end delay (s) for flows 1-3 under TCP and WCCP.]
Figure 12–7: Performance of the scenario of Fig. 12–3(b)
Fairness: In this simulation, we illustrate how WCCP addresses the unfairness problems
described in Section 12.2.3. The simulation uses the scenarios in Figs. 12–3(b) and
(c).

In Fig. 12–7, we observe that TCP completely fails to guarantee fairness among the
flows. In particular, flow 2 takes the smallest share and flow 3 the largest share of
throughput. To simplify the explanation, we consider only the forward path for data
packets, since the transmission of a data packet takes much longer than that of a short ACK. In
the 9-node chain, node i+3 (1 ≤ i ≤ 6) is a hidden terminal of node i, because the former
cannot sense the transmission of the latter but will interfere with the latter's intended
receiver. Obviously, these three flows have different numbers of hidden terminals. Along
the path of flow 3, only node 8 is a hidden terminal (of node 5), while the other two flows,
especially flow 2, suffer severe interference due to multiple hidden terminals. Accordingly,
the flows obtain different throughput, as shown above, since TCP is unable to ensure
fairness.

By contrast, WCCP is able to allocate throughput fairly to each flow. The reason is that,
by monitoring the channel busyness ratio and each flow's traffic, WCCP can accurately calculate
the available bandwidth of the channel and fairly assign it to each flow. Also, since
WCCP controls each flow's input traffic and hence the channel utilization, it successfully
[Figure: aggregate and per-flow throughput (kbps) and average end-to-end delay (s) for flows 1-6 under TCP and WCCP.]
Figure 12–8: Performance of the scenario of Fig. 12–3(c)
reduces MAC collisions. Accordingly, we also observe that flow 2 has fewer dropped packets
than in the TCP case.

We also simulate the scenario in Fig. 12–3(c). Fig. 12–8 demonstrates that TCP favors
short flows, especially one- or two-hop flows, and penalizes long flows. For one- or
two-hop flows, since each node along the path can sense the other nodes' transmissions, there is
no hidden terminal within the path. If there is no other competing one- or two-hop flow in
the neighborhood, such flows end up seizing all the bandwidth and obtaining high throughput. Flow
6 is such a two-hop flow and achieves the maximum throughput, as if there were no other
flows in the neighborhood. As a victim, flow 1 encounters severe contention from flow 6
and obtains no throughput at all, although a pre-computed shortest route is available.
The other four two-hop flows compete with each other along the same path and share the
channel approximately fairly, with a little variation, as seen from their throughput.

With WCCP, we see that the starvation problem for long flows is resolved. Flow 1
achieves almost the same throughput as flows 2-5, which share the same bottleneck, namely
the node with the maximum number of flows. Also, flow 6 takes all the channel capacity
except flow 1's share. Therefore, WCCP approaches max-min fairness in this scenario,
as discussed in Section 12.2.3.
[Figure panels: flow throughput (Kbps) versus the number of flows (4-20).]
Figure 12–9: Simulation results for the random topology with precomputed paths: (a) minimum flow throughput in 20 runs, (b) minimum flow throughput averaged over 20 runs, (c) maximum flow throughput averaged over 20 runs, (d) ratio of averaged maximum flow throughput to averaged minimum flow throughput.
Tradeoff between Throughput and Fairness: The difference in the aggregate throughput
of TCP and WCCP shown in Fig. 12–8 confirms that there is a tradeoff between
throughput and fairness: fairness is improved at the expense of aggregate throughput
when one- or two-hop flows coexist with longer flows in the network.
Since long flows consume more resource than short flows when transmitting the same
amount of traffic, granting all flows the same throughput means taking some resource
from short flows to supply long flows; short flows thus suffer throughput loss.
Furthermore, long flows mean more hidden terminals and more MAC collisions, and hence
draw more nodes into the MAC contention. An additional amount of resource is thus consumed
in coordinating channel access. In this scenario, max-min fairness is
approached at the sacrifice of 1/4 of the aggregate throughput, as discussed in Section 12.2.3.
End-to-End Delay: All the above simulation results demonstrate that WCCP always
achieves significantly shorter end-to-end delay than TCP does. We also observe that in
WCCP the end-to-end delay is proportional to the flow length. This indicates that WCCP
maintains a very small queue size at each node and greatly alleviates MAC contention.
As a result, the queueing delay and the delay caused by channel contention are very small
compared with those of TCP.
[Figure panels, each versus the number of flows (4-20): (1) aggregate throughput, (2) fairness index, (3) end-to-end delay, under (a) precomputed paths and (b) AODV.]
Figure 12–10: Simulation results averaged over 20 runs in the random topology: (1) aggregate throughput (Mbps), (2) fairness index, (3) end-to-end delay (s).
12.4.2 Random Topology
In this simulation, a random network topology is used: 50 nodes are randomly deployed
in a 300 m by 1500 m field. The results are averaged over 20 runs.

Fig. 12–9 shows that some TCP flows are always starved, while all
WCCP flows obtain a certain amount of throughput. The ratio of the average maximum
flow throughput to the average minimum flow throughput is decreased by up to 1000 times.
Clearly, WCCP completely eliminates the starvation problem. Furthermore, Fig. 12–10(a)
shows that WCCP improves Jain's fairness index by about 0.1 at the price of a 20% ∼ 45%
drop in aggregate throughput, while the end-to-end delay is decreased by 8 ∼ 10 times.
Similar results are observed when the on-demand routing protocol AODV is used, as
shown in Fig. 12–10(b).
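The fairness index referenced here is Jain's index, which for throughputs x_1, ..., x_n equals (Σx)^2 / (n Σx^2); a minimal computation:

```python
def jain_index(throughputs):
    """Jain's fairness index: 1.0 for a perfectly even allocation,
    approaching 1/n when a single flow takes everything."""
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(x * x for x in throughputs))
```

For example, four flows with equal throughput score 1.0, while one flow starving the other three scores 0.25.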
12.5 Conclusions
Congestion control is critical to reliable transport service in wireless multihop ad hoc
networks. Unfortunately, traditional TCP suffers severe performance degradation and
unfairness there. Realizing that the core cause is the poor interaction between traditional TCP and
the MAC layer, we propose a systematic solution named the Wireless Congestion Control Protocol
(WCCP) to address this problem at both layers. The major contribution of this work
is three-fold. First, we use simulation studies to show that a window-based congestion control
mechanism, such as that of TCP, results in poor and unstable performance due to the unique
medium contention, and hence argue that rate-based congestion control may be more appropriate
for ad hoc networks. Second, we show that the channel busyness ratio is a good indicator of
network congestion and available bandwidth at the MAC layer, and thus can be used as
explicit and precise feedback by the transport layer in ad hoc networks. Third, we propose an
end-to-end congestion control protocol that uses the channel busyness ratio to allocate the
shared resource and accordingly adjusts the sender's rate, so that the channel capacity can
be fully utilized and fairness is improved. We evaluate WCCP in comparison with TCP in
various scenarios. The results show that our scheme outperforms traditional TCP in terms
of channel utilization, end-to-end delay, and fairness, and solves the starvation problem of
TCP flows.
CHAPTER 13
CONCLUSIONS AND FUTURE WORK
Cross-layer design has become a popular term in the last few years, because the
traditional layered network design fails miserably when applied to wireless
environments, particularly ad hoc networks. Because of the unreliable and
unpredictable nature of the channels, most cross-layer designs focus on the physical layer
and one other layer, typically in the form of link adaptation, opportunistic
scheduling, or other channel-aware schemes. There has not been much significant
progress in systematic cross-layer design approaches that work effectively while
still preserving the design simplicity and scalability advocated by the layered
approach. Although there has been some work on cross-layer optimization, the formulations are either too
complicated to solve or the resulting solutions too simple to be practical.
In this dissertation, we propose a new cross-layer design approach based
on an in-depth understanding of the performance and design issues of the MAC layer. The MAC
layer is the anchoring layer in our approach: it connects the physical layer and the
higher networking layers, because it can reach down to the physical layer to gather link information
while accessing higher-layer service information. As we have shown, our approach can
address medium contention, QoS provisioning, fairness, congestion control, and routing
inefficiency well, whereas the traditional layered network design does not. In what
follows, we list two important issues in MANETs, fairness and QoS, which can be further
studied and addressed following the same approach as in this work.
13.1 Fairness in Mobile Ad Hoc Networks
In MANETs, there are several unique characteristics that make it very difficult to
achieve, or even consistently define, the notion of fairness. First, the contention for the
wireless channel is location-dependent. Transmission of a packet involves contention over
[Figure: a six-node topology (nodes A-F) with flows F1-F6, each between a pair of neighboring nodes, and the corresponding flow contention graph.]
Figure 13–1: An original topology and its flow contention graph
the joint neighborhoods of the sender and the receiver. And the level of contention for
the wireless channel in a geographical region is dependent on the number of contending
nodes and traffic status in the region. Second, there is a tradeoff between channel utiliza-
tion and fairness. Spatial reuse of the channel bandwidth can be achieved by scheduling
simultaneous transmissions whose regions are not in conflict. However, achieving fairness
requires allocating the channel to a flow with a large contention for a certain time share,
which correspondingly reduces the channel reuse. Third, since there is no centralized con-
trol, no station is guaranteed to have accurate knowledge of the contention even in its own
neighborhood due to the dynamic traffic and topology of MANETs. As a result, it is very
difficult to design mechanisms to achieve fairness.
Many papers, such as [98, 105, 67, 43], use the flow contention graph to
study flow fairness in MANETs. Fig. 13–1 shows an original topology and its flow
contention graph. There are six flows, each lying between a pair of neighboring nodes.
Clearly, at any time at most two flows can transmit simultaneously without
colliding with each other, such as F1 and F4. Translating this constraint into the flow contention
graph, we see that there is no edge between the two corresponding vertices. Fairness is
achieved by scheduling the same channel resource to flows that have the same level
of contention in the contention graph, where possible.
The tradeoff between fairness and channel utilization can be defined as an optimization
problem:

    \max \sum_{i=1}^{N} w_i f_i(x_i),    (13.1)
where N is the number of flows, x_i is the rate of flow i, f_i(x_i) is a strictly concave utility
function, and w_i (> 0) provides weighted fairness or service differentiation. Note that the
solution x_i of this problem must correspond to a feasible schedule that achieves it. The
utility function f(x) can be defined in terms of the flow rate x as

    f_\alpha(x) = \begin{cases} \log x, & \text{if } \alpha = 1 \\ (1-\alpha)^{-1} x^{1-\alpha}, & \text{otherwise} \end{cases}    (13.2)

It can be shown that the flow rate allocation approaches the system-optimal (throughput-maximizing)
allocation as α → 0, proportional fairness as α → 1, and max-min fairness as α → ∞.
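The utility family of Eq. 13.2 is the standard α-fair utility; a direct transcription:

```python
import math

def utility(x, alpha):
    """alpha-fair utility f_alpha(x) of Eq. 13.2: log x at alpha = 1,
    x^(1-alpha)/(1-alpha) otherwise.  alpha -> 0 rewards raw throughput
    (f(x) = x); large alpha approaches max-min fairness, since the
    marginal utility of already well-served flows vanishes fastest."""
    if alpha == 1:
        return math.log(x)
    return x ** (1.0 - alpha) / (1.0 - alpha)
```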
Since the optimal solution of the above problem depends on the global topology and is
difficult to achieve in MANETs, several sub-optimal, distributed solutions have been proposed.
The schemes in [98, 67] require information to be exchanged between
neighbors to construct a local flow contention graph and accordingly coordinate channel
access. The scheme in [98] schedules a delay in the backoff procedure of the
MAC layer according to the flow degree. In [67], the minimum contention window
size of the backoff timer is dynamically adjusted based on the obtained share of bandwidth. In
contrast, PFCR (Proportional Fair Contention Resolution) [105] and FMAC (Fair MAC)
[43] do not need any knowledge of the network topology. In PFCR, a flow that has a
packet to transmit and senses an idle channel leaves a NO-CONTEND state and begins
contending for the channel with a probability x_i; it observes the experienced contention
and adjusts x_i accordingly. The basic idea of FMAC is to let each
flow transmit exactly one packet in a time interval t whose length changes with the load of
the network or the contention context. The number of transmissions in the time interval t
serves as the feedback signal for adjusting the contention window or the time interval.
All these schemes achieve better fairness than IEEE 802.11, with more or less
sacrifice of aggregate throughput in certain topologies. However, they are all limited to one-hop
flows. Although multihop flows are not unusual in MANETs, defining
and achieving fairness for multihop flows turns out to be a very complicated issue; one
reason is that fairness with respect to end-to-end flow rate is tightly coupled with
higher-layer protocols, such as routing and congestion control. We have already proposed
a distributed scheme to provide fairness in WLANs in Chapter 5. In the future, we will
investigate approaches to improve fairness among multihop flows in MANETs.
13.2 Quality of Service in Mobile Ad Hoc Networks
While supporting real-time applications with appropriate QoS in MANETs is desirable,
it seems to be a formidable task considering that network topology and traffic load change
dynamically in MANETs, making connection state maintenance and bandwidth reservation
extremely difficult. In response, current research mainly focuses on providing service
differentiation, rather than strict QoS, using distributed control at the MAC layer.

Service differentiation at the MAC layer can be achieved by assigning different channel
access opportunities to different types of traffic. Different backoff contention windows
and DIFS values are widely used as differentiation techniques for this purpose. For example,
in the Enhanced Distributed Coordination Function (EDCF) of the IEEE 802.11e draft [72],
traffic is divided into eight categories, or priority levels. Before transmitting, each node
needs to wait for the channel to be idle for a period of time associated with its corresponding
traffic category, called the Arbitration Interframe Space (AIFS). Typically, a shorter AIFS
and a smaller backoff contention window are associated with a traffic category of higher
priority, by which EDCF establishes a probabilistic priority mechanism to allocate bandwidth
based on traffic categories. In [81], similar differentiation mechanisms are
adopted to associate each packet with a different priority, determined by the packet's
arrival time and delay bound. In this way, delay-sensitive traffic is better
supported.
Besides prioritized channel access, admission control for real-time traffic is another
powerful tool for supporting better QoS: it can effectively keep channel congestion
at a low level and reduce long queueing delays. A distributed admission control
algorithm [125] was proposed for a multicell topology where each cell has a base station.
Both data and real-time traffic are considered. This scheme relies on two algorithms,
virtual source (VS) and virtual MAC (VMAC), to measure the channel state. In both,
a virtual packet is placed in the MAC layer or the queue. Virtual
packets are scheduled to transmit on the radio channel the same way as real packets,
meaning that channel testing and random backoff are performed when necessary; a virtual
packet, however, is not actually transmitted when the VMAC decides it has won the channel.
When the delay estimated by both VS and VMAC exceeds 10 ms, new real-time sessions
are denied service. In contrast, no admission control is applied to data traffic. Note that in
addition to call admission control, real-time traffic is assigned a smaller backoff contention
window than data traffic. In [4], a stateless wireless ad hoc networks (SWAN)
model was proposed for MANETs. SWAN uses local rate control for best-effort traffic and
sender-based admission control for real-time UDP traffic to deliver service differentiation.
We have already proposed, in Chapter 4, a call admission and rate control scheme that
provides statistical QoS guarantees in wireless LANs. In the future, we will investigate
approaches to support QoS beyond service differentiation in MANETs.
REFERENCES
[1] I. Ada and C. Castelluccia, "Differentiation mechanisms for IEEE 802.11," IEEE INFOCOM'01, Anchorage, Alaska, April 2001.
[2] Y. Afek, Y. Mansour, and Z. Ostfeld, "Phantom: a simple and effective flow control scheme," ACM SIGCOMM, Stanford, California, 1996.
[3] H. Adiseshu, G. Parulkar, and G. Varghese, "A reliable and scalable striping protocol," Proc. ACM SIGCOMM, Stanford, California, Aug. 1996.
[4] G.S. Ahn, A.T. Campbell, A. Veres, and L.H. Sun, "Supporting service differentiation for real-time and best effort traffic in stateless wireless ad hoc networks (SWAN)," IEEE Transactions on Mobile Computing, Vol. 1, No. 3, pp. 192-207, 2002.
[5] G. Apostolopoulos, R. Guerin, S. Kamat, A. Orda, T. Przygienda, and D. Williams, "QoS routing mechanisms and OSPF extensions," RFC 2676, Internet Engineering Task Force, August 1999.
[6] B. Awerbuch, D. Holmer, and H. Rubens, "The medium time metric: High throughput route selection in multi-rate ad hoc wireless networks," to appear in the Kluwer Mobile Networks and Applications (MONET) Journal, Special Issue on "Internet Wireless Access: 802.11 and Beyond."
[7] L. Bao and J.J. Garcia-Luna-Aceves, "Distributed channel access scheduling for ad hoc networks," submitted for publication in IEEE Transactions on Networking.
[8] Y. Bejerano, S.-J. Han, and L. Li, "Fairness and load balancing in wireless LANs using association control," ACM MobiCom, Philadelphia, Pennsylvania, USA, Sept. 2004.
[9] Y. Bejerano and R. Bhatia, "MiFi: a framework for fairness and QoS assurance in current IEEE 802.11 networks with multiple access points," Proc. IEEE INFOCOM, Hong Kong, 2004.
[10] D. Berger, Z. Ye, P. Sinha, S.V. Krishnamurthy, M. Faloutsos, and S.K. Tripathi, "TCP friendly medium access control for ad-hoc wireless networks: alleviating self contention," Proc. IEEE MASS, Fort Lauderdale, Florida, USA, Oct. 2004.
[11] D. Bertsekas and R. Gallager, Data Networks, Second Edition, Prentice Hall, Englewood Cliffs, NJ, 1992.
[12] P. Bhagwat, P. Bhattacharya, A. Krishna, and S. Tripathi, “Enhancing throughput over wireless LANs using channel state dependent packet scheduling,” in Proc. IEEE INFOCOM, San Francisco, California, USA, March 1996.
[13] V. Bharghavan, “Performance evaluation of algorithms for wireless medium access,” in Proc. IEEE International Computer Performance and Dependability Symposium, Durham, North Carolina, 1998.
[14] V. Bharghavan, A. Demers, S. Shenker, and L. Zhang, “MACAW: a media access protocol for wireless LAN’s,” in Proc. ACM SIGCOMM, London, UK, 1994.
[15] G. Bianchi, “Performance analysis of the IEEE 802.11 distributed coordination function,” IEEE J. Sel. Areas Commun., vol. 18, pp. 535-547, Mar. 2000.
[16] G. Bianchi, L. Fratta, and M. Oliver, “Performance evaluation and enhancement of the CSMA/CA MAC protocol for 802.11 wireless LANs,” in Proc. IEEE PIMRC, Taipei, Taiwan, 1996.
[17] G. Bianchi and I. Tinnirello, “Kalman filter estimation of the number of competing terminals in an IEEE 802.11 network,” in Proc. IEEE INFOCOM, San Francisco, CA, USA, 2003.
[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, “An architecture for differentiated services,” RFC 2475, Internet Engineering Task Force, 1998.
[19] J. Broch, D.A. Maltz, D.B. Johnson, Y. Hu, and J. Jetcheva, “A performance comparison of multihop wireless ad hoc network routing protocols,” in Proc. ACM/IEEE MobiCom, Dallas, Texas, USA, Oct. 1998.
[20] F. Cali, M. Conti, and E. Gregori, “IEEE 802.11 protocol: design and performance evaluation of an adaptive backoff mechanism,” IEEE J. Sel. Areas Commun., vol. 18, pp. 1774-1786, Sept. 2000.
[21] F. Cali, M. Conti, and E. Gregori, “Tuning of the IEEE 802.11 protocol to achieve a theoretical throughput limit,” IEEE/ACM Trans. on Networking, vol. 8, pp. 785-799, Dec. 2000.
[22] F. Cali, M. Conti, and E. Gregori, “IEEE 802.11 wireless LAN: capacity analysis and protocol enhancement,” in Proc. IEEE INFOCOM, San Francisco, CA, USA, March 1998.
[23] K. Chandran, S. Raghunathan, S. Venkatesan, and R. Prakash, “A feedback-based scheme for improving TCP performance in ad hoc wireless networks,” IEEE Personal Communications, vol. 8, pp. 34-39, Feb. 2001.
[24] K. Chen, Y. Xue, and K. Nahrstedt, “On setting TCP’s congestion window limit in mobile ad hoc networks,” in Proc. IEEE ICC, Anchorage, Alaska, May 2003.
[25] X. Chen, W. Liu, H. Zhai, and Y. Fang, “Location-aware resource management in mobile ad hoc networks,” to appear in ACM Wireless Networks, 2004.
[26] X. Chen, H. Zhai, and Y. Fang, “Enhancing the IEEE 802.11e in QoS support: analysis and mechanisms,” in Proc. of the Second International Conference on Quality of Service in Heterogeneous Wired/Wireless Networks (QShine’05), Orlando, Florida, USA, Aug. 2005.
[27] X. Chen, H. Zhai, X. Tian, and Y. Fang, “Supporting QoS in IEEE 802.11e wireless LANs,” to appear in IEEE Transactions on Wireless Communications, 2005.
[28] X. Chen, H. Zhai, J. Wang, and Y. Fang, “TCP performance over mobile ad hoc networks,” Canadian Journal of Electrical and Computer Engineering (CJECE) (Special Issue on Advances in Wireless Communications and Networking), vol. 29, no. 1/2, pp. 129-134, January/April 2004.
[29] ——, “A survey on improving TCP performance over wireless networks,” Resource Management in Wireless Networking, edited by M. Cardei, I. Cardei and D.-Z. Du, Kluwer Academic Publishers, 2004.
[30] D. W. Choi, “Frame alignment in a digital carrier system - a tutorial,” IEEE Communications Magazine, vol. 28, no. 2, Feb. 1990.
[31] S. Choi, J. Prado, S. Mangold, and S. Shankar, “IEEE 802.11e contention-based channel access (EDCF) performance evaluation,” in Proc. IEEE ICC, Anchorage, Alaska, May 2003.
[32] Cisco Aironet 802.11a/b/g wireless LAN client adapters (CB21AG and PI21AG) installation and configuration guide, Cisco Systems, Inc., 2004.
[33] D. Clark, S. Shenker, and L. Zhang, “Supporting real-time application in an integrated services packet network: architecture and mechanism,” in Proc. ACM SIGCOMM, Baltimore, Maryland, USA, 1992.
[34] C. Coutras, S. Gupta, and N. B. Shroff, “Scheduling of real-time traffic in IEEE 802.11 wireless LANs,” Wireless Networks, vol. 6, no. 6, pp. 457-466, 2000.
[35] D. S. J. De Couto, D. Aguayo, B. A. Chambers, and R. Morris, “Performance of multihop wireless networks: Shortest path is not enough,” in Proc. of the First Workshop on Hot Topics in Networks (HotNets-I), Princeton, New Jersey, USA, October 2002.
[36] D. S. J. De Couto, D. Aguayo, J. Bicket, and R. Morris, “A high-throughput path metric for multi-hop wireless routing,” in Proc. ACM MobiCom, San Diego, CA, USA, September 2003.
[37] J. Deng, B. Liang, and P. K. Varshney, “Tuning the carrier sensing range of IEEE 802.11 MAC,” in Proc. IEEE GLOBECOM, Dallas, Texas, USA, Dec. 2004.
[38] R. Diestel, Graph Theory, 3rd ed., Springer, New York, 2006.
[39] R. Draves, J. Padhye, and B. Zill, “Routing in multi-radio, multi-hop wireless mesh networks,” in Proc. ACM MobiCom, Philadelphia, PA, USA, September 2004.
[40] T. D. Dyer and R. V. Boppana, “A comparison of TCP performance over three routing protocols for mobile ad hoc networks,” in Proc. ACM MobiHoc, Long Beach, California, USA, Oct. 2001.
[41] K. Fall and K. Varadhan, editors, NS Notes and Documentation, The VINT Project, UC Berkeley, LBL, USC/ISI, and Xerox PARC, April 2002.
[42] Y. Fang and A.B. McDonald, “Cross-layer performance effects of path coupling in wireless ad hoc networks: power and throughput implications of IEEE 802.11 MAC,” in Proc. IEEE IPCCC, Phoenix, Arizona, USA, Apr. 2002.
[43] Z. Fang and B. Bensaou, “Fair bandwidth sharing algorithms based on game theory frameworks for wireless ad-hoc networks,” in Proc. IEEE INFOCOM, Hong Kong, China, March 2004.
[44] C. H. Foh and M. Zukerman, “Performance analysis of the IEEE 802.11 MAC protocol,” in Proc. European Wireless, Florence, Italy, Feb. 2002.
[45] Z. Fu, X. Meng, and S. Lu, “How bad TCP can perform in mobile ad-hoc networks,” in Proc. IEEE Symposium on Computers and Communications, Taormina, Italy, Jul. 2002.
[46] Z. Fu, P. Zerfos, H. Luo, S. Lu, L. Zhang, and M. Gerla, “The impact of multihop wireless channel on TCP throughput and loss,” in Proc. IEEE INFOCOM, San Francisco, CA, USA, Mar. 2003.
[47] C. L. Fullmer and J. J. Garcia-Luna-Aceves, “Solutions to hidden terminal problems in wireless networks,” in Proc. ACM SIGCOMM, Cannes, France, Sept. 1997.
[48] J.J. Garcia-Luna-Aceves and A. Tzamaloukas, “Reversing the collision-avoidance handshake in wireless networks,” in Proc. ACM/IEEE MobiCom, Seattle, Washington, USA, Aug. 1999.
[49] H. Garcia-Molina, “Elections in a distributed computing system,” IEEE Trans. Comp., vol. 31, no. 1, Jan. 1982.
[50] M. Gerla, R. Bagrodia, L. Zhang, K. Tang, and L. Wang, “TCP over wireless multihop protocols: simulation and experiments,” in Proc. IEEE ICC, Vancouver, British Columbia, Canada, Jun. 1999.
[51] M. Gerla, K. Tang, and R. Bagrodia, “TCP performance in wireless multihop networks,” in Proc. IEEE WMCSA, Washington, DC, USA, Feb. 1999.
[52] S. Gobriel, R. Melhem, and D. Mosse, “A unified interference/collision analysis for power-aware ad hoc networks,” in Proc. IEEE INFOCOM, Hong Kong, China, March 2004.
[53] D. Gross and C. M. Harris, Fundamentals of Queueing Theory, 3rd ed., John Wiley & Sons, New York, 1998.
[54] X. Guo, S. Roy, and W. S. Conner, “Spatial reuse in wireless ad-hoc networks,” in Proc. IEEE VTC, Orlando, Florida, USA, 2003.
[55] P. Gupta and P. R. Kumar, “The capacity of wireless networks,” IEEE Transactions on Information Theory, vol. 46, pp. 388-404, 2000.
[56] R. Gupta, Z. Jia, T. Tung, and J. Walrand, “Interference-aware QoS routing (IQRouting) for ad-hoc networks,” in Proc. IEEE GLOBECOM, St. Louis, Missouri, USA, November 2005.
[57] V. Gupta, S. V. Krishnamurthy, and M. Faloutsos, “Improving the performance of TCP in the presence of interacting UDP flows in ad hoc networks,” in Proc. IFIP Networking, Athens, Greece, May 2004.
[58] Z. Hadzi-Velkov and B. Spasenovski, “Saturation throughput-delay analysis of IEEE 802.11 DCF in fading channel,” in Proc. IEEE ICC, Anchorage, Alaska, May 2003.
[59] Z.J. Haas and J. Deng, “Dual busy tone multiple access (DBTMA) - a multiple access control for ad hoc networks,” IEEE Trans. Commun., vol. 50, pp. 975-985, June 2002.
[60] Z.J. Haas and J. Deng, “Dual busy tone multiple access (DBTMA): performance results,” in Proc. IEEE WCNC, New Orleans, Louisiana, Sept. 1999.
[61] R. Hekmat and P. V. Mieghem, “Interference in wireless multi-hop ad-hoc networks and its effect on network capacity,” in Proc. Med-Hoc-Net, Sardegna, Italy, September 2002.
[62] T. Henderson, D. Kotz, and I. Abyzov, “The changing usage of a mature campus-wide wireless network,” in Proc. ACM MobiCom, Philadelphia, PA, USA, Sept. 2004.
[63] M. Heusse, F. Rousseau, G. Berger-Sabbatel, and A. Duda, “Performance anomaly of 802.11b,” in Proc. IEEE INFOCOM, San Francisco, CA, USA, March 2003.
[64] G. Holland and N. H. Vaidya, “Analysis of TCP performance over mobile ad hoc networks,” in Proc. ACM MobiCom, Seattle, Washington, USA, Aug. 1999.
[65] G. Holland, N. Vaidya, and P. Bahl, “A rate-adaptive MAC protocol for wireless networks,” in Proc. ACM MobiCom, Rome, Italy, July 2001.
[66] T. S. Ho and K. C. Chen, “Performance analysis of IEEE 802.11 CSMA/CA medium access control protocol,” in Proc. IEEE PIMRC, Taipei, Taiwan, 1996.
[67] X.L. Huang and B. Bensaou, “On max-min fairness and scheduling in wireless ad-hoc networks: analytical framework and implementation,” in Proc. ACM MobiHoc, Long Beach, California, USA, Oct. 2001.
[68] IEEE standard for wireless LAN medium access control (MAC) and physical layer (PHY) specifications, ISO/IEC 8802-11:1999(E), Aug. 1999.
[69] IEEE standard for wireless LAN medium access control (MAC) and physical layer (PHY) specifications, IEEE Std 802.11b-1999, Sept. 1999.
[70] IEEE standard for wireless LAN medium access control (MAC) and physical layer (PHY) specifications, IEEE Std 802.11g-2003, June 2003.
[71] IEEE standard for wireless LAN medium access control (MAC) and physical layer (PHY) specifications, IEEE Std 802.11a-1999, 1999.
[72] Draft supplement to part 11: medium access control (MAC) enhancements for quality of service (QoS), IEEE Std 802.11e/D8.0, Feb. 2004.
[73] ITU-T G.114, One-way transmission time, 1996.
[74] ITU-T G.1010, End-user multimedia QoS categories, 2001.
[75] K. Jain, J. Padhye, V. Padmanabhan, and L. Qiu, “Impact of interference on multi-hop wireless network performance,” in Proc. ACM MobiCom, San Diego, CA, USA, September 2003.
[76] Z. Ji, Y. Yang, J. Zhou, M. Takai, and R. Bagrodia, “Exploiting medium access diversity in rate adaptive wireless LANs,” in Proc. ACM MobiCom, Philadelphia, PA, USA, Sept. 2004.
[77] Z. Jia, R. Gupta, J. Walrand, and P. Varaiya, “Bandwidth guaranteed routing for ad-hoc networks with interference consideration,” in Proc. 10th IEEE Symposium on Computers and Communications (ISCC), Cartagena, Spain, June 2005.
[78] V. Kanodia, C. Li, A. Sabharwal, B. Sadeghi, and E. Knightly, “Distributed multi-hop scheduling and medium access with delay and throughput constraints,” in Proc. ACM MobiCom, Rome, Italy, 2001.
[79] S. Jamin, P.B. Danzig, S. Shenker, and L. Zhang, “A measurement-based admission control algorithm for integrated service packet networks,” IEEE/ACM Transactions on Networking, vol. 5, no. 1, Feb. 1997.
[80] A. Kamerman and L. Monteban, “WaveLAN II: A high-performance wireless LAN for the unlicensed band,” Bell Labs Technical Journal, Summer 1997.
[81] V. Kanodia, C. Li, A. Sabharwal, B. Sadeghi, and E. Knightly, “Distributed multi-hop scheduling and medium access with delay and throughput constraints,” in Proc. ACM MobiCom, Rome, Italy, July 2001.
[82] P. Karn, “MACA - a new channel access method for packet radio,” in Proc. ARRL/CRRL Amateur Radio 9th Computer Networking Conference, London, Ontario, Canada, 1990, pp. 134-140.
[83] D. Katabi, M. Handley, and C. Rohrs, “Congestion control for high bandwidth-delay product networks,” in Proc. ACM SIGCOMM, Pittsburgh, PA, USA, 2002.
[84] V. Kawadia and P. R. Kumar, “A cautionary perspective on cross-layer design,” IEEE Wireless Communications, vol. 12, no. 1, pp. 3-11, Feb. 2005.
[85] H. Kim and J. Hou, “Improving protocol capacity with model-based frame scheduling in IEEE 802.11-operated WLANs,” in Proc. ACM MobiCom, San Diego, CA, USA, Sep. 2003.
[86] L. Kleinrock, Queueing Systems, Volume I, John Wiley & Sons, 1975.
[87] L. Kleinrock, Queueing Systems, Volume II, John Wiley & Sons, 1975.
[88] A. Kopsel and A. Wolisz, “Voice transmission in an IEEE 802.11 WLAN based access network,” in Proc. WoWMoM, Rome, Italy, 2001.
[89] H. T. Kung, T. Blackwell, and A. Chapman, “Credit-based flow control for ATM networks: credit update protocol, adaptive credit allocation, and statistical multiplexing,” in Proc. ACM SIGCOMM, London, UK, Sep. 1994.
[90] Y. Kwon, Y. Fang, and H. Latchman, “A novel MAC protocol with fast collision resolution for wireless LANs,” in Proc. IEEE INFOCOM, San Francisco, CA, USA, 2003.
[91] J. Li, C. Blake, D. S. J. De Couto, H. I. Lee, and R. Morris, “Capacity of ad hoc wireless networks,” in Proc. ACM MobiCom, Rome, Italy, July 2001.
[92] Y. Li, H. Wu, D. Perkins, N. Tzeng, and M. Bayoumi, “MAC-SCC: medium access control with a separate control channel for multihop wireless networks,” in Proc. 23rd International Conference on Distributed Computing Systems Workshops (ICDCSW’03), Providence, Rhode Island, USA, May 2003.
[93] Z. Li, S. Nandi, and A. K. Gupta, “Improving MAC performance in wireless ad hoc networks using enhanced carrier sensing (ECS),” in Proc. Third IFIP Networking, Athens, Greece, 2004.
[94] A. Lindgren, A. Almquist, and O. Schelen, “Evaluation of quality of service schemes for IEEE 802.11 wireless LANs,” in Proc. Local Computer Networks, Tampa, Florida, 2001.
[95] J. Liu and S. Singh, “ATCP: TCP for mobile ad hoc networks,” IEEE J. Sel. Areas Commun., vol. 19, pp. 1300-1315, Jul. 2001.
[96] S.C. Lo, G. Lee, and W.T. Chen, “An efficient multipolling mechanism for IEEE 802.11 wireless LANs,” IEEE Trans. Computers, vol. 52, no. 6, pp. 764-778, 2003.
[97] S. Lu, T. Nandagopal, and V. Bharghavan, “A wireless fair service algorithm for packet cellular networks,” in Proc. ACM MobiCom, Dallas, Texas, USA, Oct. 1998.
[98] H. Luo, S. Lu, and V. Bharghavan, “A new model for packet scheduling in multihop wireless networks,” in Proc. ACM MobiCom, Boston, Massachusetts, USA, Aug. 2000.
[99] S. Mangold, S. Choi, P. May, O. Klein, G. Hietz, and L. Stibor, “IEEE 802.11e wireless LAN for quality of service,” in Proc. European Wireless ’02, Florence, Italy, Feb. 2002.
[100] P. P. Mishra and H. Kanakia, “A hop by hop rate-based congestion control scheme,” in Proc. ACM SIGCOMM, Baltimore, Maryland, USA, Aug. 1992.
[101] J. Mo and J. Walrand, “Fair end-to-end window-based congestion control,” IEEE/ACM Transactions on Networking, vol. 8, no. 8, pp. 556-567, Oct. 2000.
[102] J. Monks, V. Bharghavan, and W. Hwu, “A power controlled multiple access protocol for wireless packet networks,” in Proc. IEEE INFOCOM, Anchorage, Alaska, USA, April 2001.
[103] J. P. Monks, P. Sinha, and V. Bharghavan, “Limitations of TCP-ELFN for ad hoc networks,” in Proc. MOMUC, Tokyo, Japan, Oct. 2000.
[104] A. Muqattash and M. Krunz, “Power controlled dual channel (PCDC) medium access protocol for wireless ad hoc networks,” in Proc. IEEE INFOCOM, San Francisco, CA, USA, March 2003.
[105] T. Nandagopal, T.-E. Kim, X. Gao, and V. Bharghavan, “Achieving MAC layer fairness in wireless packet networks,” in Proc. ACM MobiCom, Boston, Massachusetts, USA, Aug. 2000.
[106] The network simulator ns-2. http://www.isi.edu/nsnam/ns.
[107] W. Pattara-Atikom, P. Krishnamurthy, and S. Banerjee, “Distributed mechanisms for quality of service in wireless LANs,” IEEE Wireless Communications, June 2003.
[108] C. Perkins, E.M. Royer, S.R. Das, and M.K. Marina, “Performance comparison of two on-demand routing protocols for ad hoc networks,” IEEE Pers. Commun., vol. 8, pp. 16-28, Feb. 2001.
[109] H. Perros and K. Elsayed, “Call admission control schemes: a review,” IEEE Communications Magazine, Nov. 1996.
[110] S. Pilosof, R. Ramjee, D. Raz, Y. Shavitt, and P. Sinha, “Understanding TCP fairness over wireless LAN,” in Proc. IEEE INFOCOM, San Francisco, CA, USA, March 2003.
[111] J. Postel, “Internet control message protocol,” IETF RFC 792.
[112] L.P.A. Robichaud, Signal Flow Graphs and Applications, Prentice-Hall, 1962.
[113] B. Sadeghi, V. Kanodia, A. Sabharwal, and E. Knightly, “Opportunistic media access for multirate ad hoc networks,” in Proc. ACM MobiCom, San Diego, CA, USA, Sept. 2003.
[114] S.-T. Sheu and T.-F. Sheu, “A bandwidth allocation/sharing/extension protocol for multimedia over IEEE 802.11 ad hoc wireless LANs,” IEEE J. Sel. Areas Commun., vol. 19, pp. 2065-2080, Oct. 2001.
[115] M. Shreedhar and G. Varghese, “Efficient fair queuing using deficit round-robin,” IEEE/ACM Trans. Netw., vol. 4, pp. 375-385, Jun. 1996.
[116] S. Singh and J. Kurose, “Electing ‘good’ leaders,” J. Par. Distr. Comput., vol. 18, no. 1, May 1993.
[117] S. Singh and C. S. Raghavendra, “PAMAS - power aware multi-access protocol with signalling for ad hoc networks,” Computer Communications Review, July 1998.
[118] J. So and N. H. Vaidya, “A multi-channel MAC protocol for ad hoc wireless networks,” Technical Report, Jan. 2003.
[119] J. L. Sobrinho and A. S. Krishnakumar, “Real-time traffic over the IEEE 802.11 medium access control layer,” Bell Labs Tech. J., pp. 172-187, 1996.
[120] F. Talucci and M. Gerla, “MACA-BI (MACA by invitation): a receiver oriented access protocol for wireless multihop networks,” in Proc. IEEE PIMRC, Helsinki, Finland, Sep. 1997.
[121] G. Tan and J. Guttag, “Time-based fairness improves performance in multi-rate wireless LANs,” in Proc. USENIX Annual Technical Conference, Boston, Massachusetts, USA, June 2004.
[122] F.A. Tobagi and L. Kleinrock, “Packet switching in radio channels: Part II - The hidden terminal problem in carrier sense multiple-access and the busy-tone solution,” IEEE Trans. Commun., vol. COM-23, pp. 1417-1433, Dec. 1975.
[123] N. Vaidya, P. Bahl, and S. Gupta, “Distributed fair scheduling in a wireless LAN,” in Proc. ACM MobiCom, Boston, Massachusetts, USA, Aug. 2000.
[124] M. Veeraraghavan, N. Cocker, and T. Moors, “Support of voice services in IEEE 802.11 wireless LANs,” in Proc. IEEE INFOCOM, Anchorage, Alaska, USA, 2001.
[125] A. Veres, A. T. Campbell, M. Barry, and L.-H. Sun, “Supporting service differentiation in wireless packet networks using distributed control,” IEEE J. Sel. Areas Commun., vol. 19, pp. 2081-2093, Oct. 2001.
[126] M. A. Visser and M. E. Zarki, “Voice and data transmission over an 802.11 wireless network,” in Proc. IEEE PIMRC, Toronto, Ontario, Canada, 1995.
[127] F. Wang and Y. Zhang, “Improving TCP performance over mobile ad-hoc networks with out-of-order detection and response,” in Proc. ACM MobiHoc, Lausanne, Switzerland, Jun. 2002.
[128] J. Wang, H. Zhai, W. Liu, and Y. Fang, “Reliable and efficient packet forwarding by utilizing path diversity in wireless ad hoc networks,” in Proc. IEEE Military Communications Conference (Milcom’04), Monterey, California, USA, Nov. 2004.
[129] J. Wang, H. Zhai, and Y. Fang, “Opportunistic media access control and rate adaptation for wireless ad hoc networks,” in Proc. IEEE International Conference on Communications (ICC’04), Paris, France, June 2004.
[130] ——, “Opportunistic packet scheduling and media access control for wireless LANs and multi-hop ad hoc networks,” in Proc. IEEE Wireless Communications and Networking Conference (WCNC’04), Atlanta, Georgia, USA, March 2004.
[131] J. Wang, H. Zhai, Y. Fang, J. M. Shea, and D. Wu, “OMAR: Utilizing multiuser diversity in wireless ad hoc networks,” to appear in IEEE Transactions on Mobile Computing.
[132] Y. Wang and J.J. Garcia-Luna-Aceves, “A hybrid collision avoidance scheme for ad hoc networks,” Wireless Networks, vol. 10, pp. 439-446, Jul. 2004.
[133] C. Wu and V. Li, “Receiver-initiated busy-tone multiple access in packet radio networks,” in SIGCOMM ’87: Proceedings of the ACM Workshop on Frontiers in Computer Communications Technology, Stowe, Vermont, USA, Aug. 1987.
[134] H. Wu, Y. Peng, K. Long, S. Cheng, and J. Ma, “Performance of reliable transport protocol over IEEE 802.11 wireless LAN: analysis and enhancement,” in Proc. IEEE INFOCOM, New York, NY, USA, June 2002.
[135] S. Wu, Y. Tseng, and J. Sheu, “Intelligent medium access for mobile ad hoc networks with busy tones and power control,” IEEE J. Sel. Areas Commun., vol. 18, pp. 1647-1657, Sept. 2000.
[136] S.-L. Wu, C.-Y. Lin, Y.-C. Tseng, and J.-P. Sheu, “A new multi-channel MAC protocol with on-demand channel assignment for mobile ad hoc networks,” in Proc. Int’l Symp. on Parallel Architectures, Algorithms and Networks (I-SPAN), Dallas, Texas, USA, 2000.
[137] Y. Xiao, H. Li, and S. Choi, “Protection and guarantee for voice and video traffic in IEEE 802.11e wireless LANs,” in Proc. IEEE INFOCOM, Hong Kong, China, 2004.
[138] K. Xu, M. Gerla, and S. Bae, “How effective is the IEEE 802.11 RTS/CTS handshake in ad hoc networks?” in Proc. IEEE GLOBECOM, Taipei, Taiwan, 2002.
[139] K. Xu, M. Gerla, L. Qi, and Y. Shu, “Enhancing TCP fairness in ad hoc wireless networks using neighborhood RED,” in Proc. ACM MobiCom, San Diego, CA, USA, Sep. 2003.
[140] S. Xu and T. Saadawi, “Does the IEEE 802.11 MAC protocol work well in multihop wireless ad hoc networks?” IEEE Commun. Mag., vol. 39, pp. 130-137, Jun. 2001.
[141] Q. Xue and A. Ganz, “Proportional service differentiation in wireless LANs with spacing-based channel occupancy regulation,” in Proc. ACM Multimedia, New York, NY, USA, Oct. 2004.
[142] X. Yang and N. H. Vaidya, “On the physical carrier sense in wireless ad hoc networks,” in Proc. IEEE INFOCOM, Miami, FL, USA, March 2005.
[143] J. Yee and H. Pezeshki-Esfahani, “Understanding wireless LAN performance trade-offs,” CommsDesign.com, Nov. 2002.
[144] C. H. Yeh, “Inband busytone for robust medium access control in pervasive networking,” in Proc. Fourth Annual IEEE International Conference on Pervasive Computing and Communications Workshops, Pisa, Italy, March 2006.
[145] J.Y. Yeh and C. Chen, “Support of multimedia services with the IEEE 802.11 MAC protocol,” in Proc. IEEE ICC, New York, NY, USA, 2002.
[146] Y. Yi and S. Shakkottai, “Hop-by-hop congestion control over a wireless multi-hop network,” in Proc. IEEE INFOCOM, Hong Kong, China, Mar. 2004.
[147] H. Zhai, X. Chen, and Y. Fang, “Alleviating intra-flow and inter-flow contentions for reliable service in mobile ad hoc networks,” in Proc. IEEE MILCOM, Monterey, California, USA, Nov. 2004.
[148] ——, “A call admission and rate control scheme for multimedia support over IEEE 802.11 wireless LANs,” in Proc. First International Conference on Quality of Service in Heterogeneous Wired/Wireless Networks (QShine’04), Dallas, Texas, USA, Oct. 2004.
[149] ——, “A call admission and rate control scheme for multimedia support over IEEE 802.11 wireless LANs,” ACM Wireless Networks, vol. 12, no. 4, pp. 451-463, August 2006.
[150] ——, “How well can the IEEE 802.11 wireless LAN support quality of service?” IEEE Transactions on Wireless Communications, vol. 4, no. 6, pp. 3084-3094, Nov. 2005.
[151] ——, “Rate-based transport control for mobile ad hoc networks,” in Proc. IEEE Wireless Communications and Networking Conference (WCNC’05), New Orleans, Louisiana, USA, March 2005.
[152] ——, “WCCP: Improving transport layer performance in multihop ad hoc networks by exploiting MAC layer information,” accepted for publication in IEEE Transactions on Wireless Communications.
[153] H. Zhai and Y. Fang, “Medium access control protocols in mobile ad hoc networks: problems and solutions,” Handbook of Theoretical and Algorithmic Aspects of Ad Hoc, Sensor, and Peer-to-Peer Networks, edited by J. Wu, pp. 231-250, CRC Press, 2005.
[154] ——, “Performance of wireless LANs based on IEEE 802.11 MAC protocols,” in Proc. IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC’03), Beijing, China, Sep. 2003.
[155] ——, “Physical carrier sensing and spatial reuse in multirate and multihop wireless ad hoc networks,” in Proc. IEEE International Conference on Computer Communications (INFOCOM’06), Barcelona, Spain, April 23-29, 2006.
[156] ——, “A distributed adaptive packet concatenation scheme for sensor and ad hoc networks,” in Proc. IEEE Military Communications Conference (Milcom’05), Atlantic City, New Jersey, Oct. 17-20, 2005.
[157] ——, “Distributed flow control and medium access control in mobile ad hoc networks,” to appear in IEEE Transactions on Mobile Computing.
[158] ——, “Impact of routing metrics on path capacity in multi-rate and multi-hop wireless ad hoc networks,” in Proc. 14th IEEE International Conference on Network Protocols (ICNP’06), Santa Barbara, California, November 12-15, 2006.
[159] ——, “A single-channel solution to hidden/exposed terminal problems in wireless ad hoc networks,” submitted for publication.
[160] H. Zhai, Y. Kwon, and Y. Fang, “Performance analysis of IEEE 802.11 MAC protocols in wireless LANs,” Wiley Wireless Communications and Mobile Computing, Special Issue on Emerging WLAN Technologies and Applications, vol. 4, pp. 917-931, Dec. 2004.
[161] H. Zhai, J. Wang, X. Chen, and Y. Fang, “Medium access control in mobile ad hoc networks: Challenges and solutions,” invited paper in Wiley Wireless Communications and Mobile Computing, Special Issue on Ad Hoc Wireless Networks, vol. 6, issue 2, pp. 151-170, March 2006.
[162] H. Zhai, J. Wang, and Y. Fang, “Distributed packet scheduling for multihop flows in ad hoc networks,” in Proc. IEEE WCNC, Atlanta, Georgia, USA, March 2004.
[163] ——, “DUCHA: A dual-channel MAC protocol for mobile ad hoc networks,” to appear in IEEE Transactions on Wireless Communications, 2005.
[164] ——, “Providing statistical QoS guarantee for voice over IP in the IEEE 802.11 wireless LANs,” IEEE Wireless Communication Magazine (Special Issue on Voice over Wireless Local Area Network), vol. 13, issue 1, pp. 36-43, Feb. 2006.
[165] H. Zhai, J. Wang, Y. Fang, and D. Wu, “A dual-channel MAC protocol for mobile ad hoc networks,” in Proc. IEEE Workshop on Wireless Ad Hoc and Sensor Networks, in conjunction with IEEE Globecom 2004, Dallas, Texas, USA, Nov. 2004.
[166] H. Zhang, “Service disciplines for guaranteed performance service in packet-switching networks,” Proceedings of the IEEE, vol. 83, no. 10, Oct. 1995.
[167] J. Zhu, X. Guo, L. L. Yang, and W. S. Conner, “Leveraging spatial reuse in 802.11 mesh networks with enhanced physical carrier sensing,” in Proc. IEEE ICC, Paris, France, June 2004.
BIOGRAPHICAL SKETCH
Hongqiang Zhai received the B.E. and M.E. degrees in electrical engineering from Tsinghua University, Beijing, China, in July 1999 and January 2002, respectively. He worked as a research intern in Bell Labs Research China from June 2001 to December 2001, in Microsoft Research Asia from January 2002 to July 2002, and in Kiyon Inc. from September 2005 to December 2005. He is currently pursuing the Ph.D. degree in the Department of Electrical and Computer Engineering, University of Florida. His research interests include performance analysis, medium access control, quality of service, fairness, congestion control, routing algorithms, and cross-layer design in wireless networks. He is a student member of the ACM and the IEEE.