
MILCOM 2005 - 2005 IEEE Military Communications Conference, Atlantic City, NJ, USA (17-20 Oct. 2005)

HEADER COMPRESSION FOR AD-HOC NETWORKS

Jesus Arango and Stephen Pink
Procito, Inc., Tucson, AZ

Syeed Ali
US Army RDECOM CERDEC STCD, Fort Monmouth, NJ

Daniel Hampel and Stefano DiPierro
Booz Allen Hamilton, Eatontown, NJ

ABSTRACT The Army is focusing on mobile ad-hoc communications for squad-level operations, sensor systems, and other networks throughout the tactical battlespace. The widespread use of real-time voice and data applications characterized by smaller packet payloads, coupled with the eventual deployment of IPv6, will lead to larger header overhead and a corresponding reduction in bandwidth efficiency. This reduction in bandwidth efficiency can be mitigated through the use of header compression techniques. Early work on header compression focused on slow serial links and wired networks with low error rates. More recent work has addressed loss and error propagation issues associated with high error rates and long round-trip times commonly found on wireless links. However, context initialization and control-message overhead issues associated with node mobility in ad-hoc networks have not been addressed.

This paper presents an efficient header compression protocol for improved performance over mobile ad-hoc networks. The protocol uses a novel context initialization algorithm that leverages routing information to minimize the overhead of frequent context initializations and routing messages during periods of high node mobility. It relies on a hybrid hop-by-hop/end-to-end header compression framework and uses stateless compression of control messages. The overall result is increased network capacity and line efficiency for ad-hoc environments encountered in tactical communications.

1. INTRODUCTION Recent studies [1] [2] have shown that approximately half of the packets sent across the Internet are 80 bytes long or less. This percentage has increased over the last few years in part due to a widespread use of real-time applications. In the Army context, applications such as event-driven sensor systems,

imagers and voice networks also display similar characteristics. These numbers are a major source of concern in bandwidth-constrained networks, considering that the overwhelming majority of these packets have at least 40 bytes in headers. Small packet payloads and relatively large header sizes translate into poor line efficiency. Line efficiency is the fraction of transmitted data that is not considered overhead; in other words, the fraction of transmitted data that has some explicit, useful purpose to the highest layer, protocol or application that generated the payload. Line efficiency is formally defined as:

efficiency = payload / (headers + payload)
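As a quick numeric check of this definition, the following minimal Python sketch evaluates it for a 20-byte payload behind the 40-byte and 60-byte header chains discussed in this paper (the sketch itself is illustrative, not part of the protocol):

```python
def line_efficiency(payload: int, headers: int) -> float:
    """Fraction of transmitted bytes that carry useful payload."""
    return payload / (headers + payload)

# 20-byte VoIP payload behind a 40-byte IPv4/UDP/RTP chain
# versus a 60-byte IPv6/UDP/RTP chain.
print(round(line_efficiency(20, 40), 3))  # 0.333
print(round(line_efficiency(20, 60), 3))  # 0.25
```

With small payloads, two thirds or more of every transmitted packet is overhead, which is exactly the regime where header compression pays off.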

IP version 6 (IPv6) is expected to replace IP version 4 (IPv4) since it provides expanded addressing, a simplified header format, improved support for extensions and options, flow labeling, authentication and privacy. In fact, the Department of Defense has mandated migration to IPv6 in the 2008 timeframe. However, these advantages come at a cost of increased header size, further depleting the efficiency ratio. Figure 1 summarizes some of the most common header chains as well as the size of each component within the chain. These header chains are either 40 or 60 bytes long, depending on the underlying IP protocol.

Figure 1: Common header chains and their sizes

Figure 2 shows a plot of the line efficiency as a function of payload size for header chains that are 40 and 60 bytes long, respectively.


Figure 2: Line Efficiency vs. Payload Size

Header compression improves line efficiency by exploiting redundancies between header fields within the same packet or consecutive packets belonging to the same stream. Therefore, compression performance depends on optimal classification of the aggregate packet stream at the compressing entity into sub-streams of highly correlated packets. Correlation within a packet stream implies a highly predictable pattern of changes between consecutive packets, or that large sections of the header chain remain constant across packets of the same stream, or both.

In practice, the aggregate packet stream is usually classified into network-level or transport-level flows as packets belonging to the same flow are highly correlated. For example, many fields such as addresses, ports and protocol types remain constant across all packets of the same flow, and changes in other fields are highly predictable. Accordingly, each flow is compressed independently by first sending a packet with full, uncompressed headers to establish a context that provides common knowledge between sender and receiver about static field values as well as initial values for dynamic fields. This stage is known as context initialization. Subsequent compressed headers are interpreted and decompressed according to a previously established context. Constant fields need not be sent with compressed headers. Fields that change predictably, such as sequence numbers, are encoded incrementally, requiring only a very small number of bits. Fields with random, unpredictable changes must be sent in every header.

Every packet contains a context label. For context initialization headers it indicates the context being

initialized. For compressed headers it designates the context used to interpret and decompress the headers. The context label should be short enough to provide efficient compression but long enough to support an adequate number of flows. In some situations, the number of active flows will inevitably exceed the number of contexts, and a context replacement strategy is necessary.

Compressed headers must periodically contain or trigger incremental context changes. Therefore, packet loss or residual errors may often lead to context inconsistencies where the context of the sender differs from that of the receiver. Failure to update the receiver's context as a result of packet loss will lead to incorrect decompression of subsequent packets, also known as loss propagation. Incorrect updates of the receiver's context as a result of residual errors will similarly lead to incorrect decompression of subsequent packets, also known as error propagation.

Early work focused on compressing headers on slow serial links but did not perform well on channels with high error rates and long round-trip times due to inadequate mechanisms to prevent and react to context inconsistencies. Recent work has addressed these context synchronization issues by reducing context inconsistencies and reacting faster and more efficiently to loss propagation and error propagation.

Ad-hoc networks create additional challenges such as context initialization overhead and packet reordering issues associated with node mobility. The dynamic nature of ad-hoc networks has a negative impact on header compression efficiency: paths change frequently as nodes move around, and new contexts must be established on alternate links before diverting packets through different paths. As the average node speed increases, the topology becomes more dynamic and the problem worsens.

This paper presents an efficient header compression protocol for improved performance over mobile ad-hoc networks.
The protocol uses a novel context initialization algorithm that leverages routing information to minimize the overhead of frequent context initializations and routing messages during periods of high node mobility. It relies on a hybrid hop-by-hop/end-to-end header compression framework and uses stateless compression of control messages. The overall result is increased network


capacity and line efficiency even in the presence of rapid path fluctuations.
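The per-flow compression cycle described above (a full-header context initialization followed by incrementally encoded headers) can be sketched in Python. This is an illustrative toy, not the paper's protocol: the packet tuples, field names, and single delta-encoded "seq" field are assumptions made for clarity.

```python
class Compressor:
    def __init__(self):
        self.contexts = {}  # context label -> last header sent

    def compress(self, label, header):
        ctx = self.contexts.get(label)
        if ctx is None or ctx["static"] != header["static"]:
            # Context initialization: send the full header to establish
            # shared state between compressor and decompressor.
            self.contexts[label] = dict(header)
            return ("FULL", label, dict(header))
        # Predictable dynamic field: only a small delta is transmitted.
        delta = header["seq"] - ctx["seq"]
        ctx["seq"] = header["seq"]
        return ("DELTA", label, delta)

class Decompressor:
    def __init__(self):
        self.contexts = {}

    def decompress(self, packet):
        kind, label, body = packet
        if kind == "FULL":
            self.contexts[label] = dict(body)
            return dict(body)
        # If the FULL packet was lost, this lookup fails: loss propagation.
        ctx = self.contexts[label]
        ctx["seq"] += body
        return dict(ctx)
```

The first packet of a flow carries full headers; later packets shrink to a label plus a small delta, which is where the line-efficiency gain comes from, and a lost initialization packet illustrates how loss propagation arises.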

2. HYBRID HEADER COMPRESSION We propose a hybrid hop-by-hop/end-to-end header compression framework as a more suitable approach for mobile ad-hoc networks. A hybrid approach allows the end-to-end compression of transport-level and application-level headers. Periodic context initializations are not required because end-to-end context initialization is oblivious to path fluctuations. End-to-end headers need not be compressed on a hop-by-hop basis because they need not be examined by intermediate nodes. Network-level headers, on the other hand, are used to perform packet forwarding and must be available in their uncompressed state at each intermediate node. IP headers must therefore be compressed on a hop-by-hop basis. Figure 3 illustrates the hybrid header approach and its corresponding IP network stack model. An end-to-end protocol compresses the transport-level and application-level headers while a hop-by-hop protocol compresses the network-level headers.

Figure 3: A hybrid approach to header compression

Each pair of adjacent nodes on the path between the source and destination maintains a synchronized hop-by-hop context to compress and decompress the network headers, while the source and destination maintain an end-to-end context to compress and decompress end-to-end headers. The end-to-end context is never lost due to topology changes.

3. HOP-BY-HOP CONTEXT INITIALIZATION Hop-by-hop context initialization must still be performed for IP headers, and its associated overhead increases with frequent path fluctuations. Most of the context initialization overhead can be attributed to IPv6 addresses and little would be gained unless specific measures are taken.

We solve this problem by caching address information transmitted in routing messages in order to reduce the size of context initialization headers. We focus our discussion specifically on AODV as described in [8] and [9]. AODV uses a broadcast route discovery mechanism to create routes on demand. A source node wishing to communicate with a particular destination node initiates a path discovery by broadcasting a route request message (RREQ) that contains, among other things, the IP address of the destination node. The RREQ message is rebroadcast by each receiving node until it reaches a node that has a route to the destination (possibly the destination itself). This node sends a unicast route reply message (RREP) to the source along the reverse route established by the RREQ message. As the RREP message makes its way to the source, each intermediate node establishes the forward path that will eventually be used to send packets from source to destination.

Our context initialization algorithm caches address information transmitted in route request and route reply messages so that address fields do not have to be sent during context initialization of IP header compression. The algorithm works as follows. Each RREQ message contains the originator or source address s and the destination address d. The algorithm additionally tags each RREQ message with a small number l called an address label. The tuple (l, s, d) is also added to an address cache, effectively associating the address label with a (source, destination) pair. However, it is not safe to use the address label for context initialization until the association between the label and the address pair has been confirmed by the downstream node with a route reply. The cache entry is marked as "active" once the corresponding route reply is received. The size of the address label should be considerably smaller than the size of both IP addresses combined.
Our implementation uses a 7-bit address label, while the combined size of the source and destination addresses in IPv6 is 32 bytes. Although we are also increasing the size of the RREQ message, the actual transmission size is in fact smaller because, as explained later, our hop-by-hop header compression protocol also compresses routing messages. The address labels are local to each link, meaning that as the RREQ message travels along the path each


node changes the RREQ's label to a value of its choice. Because the set of address labels is small, labels will eventually need to be reused for transmitting new RREQ messages. Therefore, each node maintains a counter for each cache entry that counts the number of times the label has been used to initialize a context, and a replacement algorithm selects the label with the smallest counter. When a new flow is detected by the compressor, it checks whether the source and destination addresses are currently associated with a label and transmits the label as part of the context initialization message.
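The caching and replacement scheme just described can be illustrated with a small Python sketch. The class and field names are assumptions made for illustration; only the 7-bit label space, the "active" confirmation rule, and the smallest-use-counter replacement rule come from the text above.

```python
LABEL_SPACE = 128  # 7-bit address labels

class AddressCache:
    def __init__(self):
        # label -> {"pair": (src, dst), "active": bool, "uses": int}
        self.entries = {}

    def on_rreq(self, label, src, dst):
        # Record (l, s, d); not yet safe for context initialization.
        self.entries[label] = {"pair": (src, dst), "active": False, "uses": 0}

    def on_rrep(self, label):
        # The route reply confirms the label/address association.
        if label in self.entries:
            self.entries[label]["active"] = True

    def label_for(self, src, dst):
        # Return a confirmed label for this (source, destination) pair.
        for label, e in self.entries.items():
            if e["active"] and e["pair"] == (src, dst):
                e["uses"] += 1   # counts context initializations
                return label
        return None

    def victim(self):
        # Replacement: reuse the label with the fewest initializations.
        return min(self.entries, key=lambda l: self.entries[l]["uses"])
```

A label becomes usable only after `on_rrep` marks it active, mirroring the rule that only acknowledged route requests may seed context initialization.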

3.1 Address Label Selection When performing hop-by-hop IP context initialization for a particular flow, the link’s upstream node is faced with the decision of which address label to include in the context initialization header. Routing requests do not necessarily traverse the entire path between a flow’s source and destination as it may very well be possible that an intermediate node already has a route to the destination. In such cases the route request is not forwarded any further and a route reply is generated by the intermediate node. Not surprisingly, different scenarios arise in the selection of the address tags. The different scenarios can be readily identified by analyzing the example depicted in Figure 4. Three source nodes (s1, s2, s3) and two destination nodes (d1, d2) are bridged by a common link (a,b). In this example we will analyze several route requests for various (source, destination) pairs and we are particularly interested in what address labels are available for context initialization on link (a,b). The dotted lines on each link represent route requests that have been successfully acknowledged with a route reply. These lines may be connected with dotted arcs to illustrate the acknowledged path between (source, destination) pairs. Special emphasis is made that these are acknowledged route requests as only the address labels of acknowledged requests may be used in context initialization to ensure correctness. Each route request is labeled with a tuple (s,d,t) where s and d are the source and destination associated with the route request and t represents the time when the route request was acknowledged on the particular link. Note that route replies always follow the reverse

path of their corresponding route request, so for two adjacent links (i,j) and (j,k) on a forward path it must be the case that ti,j > tj,k. Since each route request in our example is labeled with a unique timestamp, we will also use the timestamps as address labels to reduce the amount of notation. Figure 4(a) illustrates the route discovery for (s1,d1) and for (s2,d2). There is no previous routing state; consequently, route requests travel all the way from s1 to d1 and from s2 to d2, respectively. When the first data packet for each flow arrives at node a, context initialization is performed by transmitting an address label ls,d representing the source and destination addresses. The address label for (s1,d1) is ls,d = ls1,d1 = t2. Similarly, the address label for (s2,d2) is ls,d = ls2,d2 = t5.

Figure 4: Label selection scenarios on link (a,b)

A different situation arises during route discovery for (s2, d1) depicted in Figure 4(b). A route entry for d1 already exists in node a, thus no route request is transmitted on link (a,b) and a route reply is generated by node a. Consequently there is no label associated with (s2, d1) on link (a,b). The best alternative is to send a tuple (ls, ld) where ls is the label associated with the source address and ld is the label associated with the destination address. In this particular example (ls, ld)=(ls2, ld1)=(t5, t2). Route discovery for (s3,d2) represents another situation and it is also portrayed in Figure 4(b). Analogous to the previous case, a route entry for d2 already exists in node a, but what is unique about this


case is that there is no label on link (a,b) associated with source s3. The best alternative in this case is to send a tuple (s, ld) where s is the source address and ld is the label associated with the destination address. In this particular example (s, ld) = (s3, ld2) = (s3, t5). There are only two other scenarios, of the form (ls, d) and (s, d), respectively. Both involve cases where no address label exists for the destination. These two scenarios are theoretically not possible because a route request for said destination must have been transmitted and acknowledged before any data packets may be transmitted on the link; that is, unless label replacement is used, and indeed label replacement is essential when using a limited label space in an effort to reduce the number of bits transmitted. The different labeling scenarios that have just been discussed are summarized in Table 1 with their corresponding (source, destination) tuples and label values from the example in Figure 4. The label encoding scheme for context initialization must allow the efficient encoding of any of these scenarios.

Scenario      Example       Label Values
ls,d          (s1, d1)      t2
(ls, ld)      (s2, d1)      (t5, t2)
(s, ld)       (s3, d2)      (s3, t5)
(ls, d)       ------        ------
(s, d)        ------        ------

Table 1: Address label scenarios
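A compressor choosing among the scenarios in Table 1 would prefer the most compact encoding first. The following Python sketch is a hedged illustration of that preference order; the function name and dictionary layout are assumptions, not the paper's implementation.

```python
def encode_addresses(pair_labels, addr_labels, src, dst):
    """pair_labels: (src, dst) -> label; addr_labels: address -> label.
    Returns the most compact available encoding of the address pair."""
    if (src, dst) in pair_labels:
        return ("l_sd", pair_labels[(src, dst)])       # single pair label
    ls, ld = addr_labels.get(src), addr_labels.get(dst)
    if ls is not None and ld is not None:
        return ("ls_ld", (ls, ld))                     # two address labels
    if ld is not None:
        return ("s_ld", (src, ld))                     # full source address
    if ls is not None:
        return ("ls_d", (ls, dst))    # only reachable after label replacement
    return ("s_d", (src, dst))        # likewise, both labels replaced
```

Replaying the Figure 4 example, (s1,d1) yields a single pair label, (s2,d1) a pair of address labels, and (s3,d2) a full source address plus destination label.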

4. COMPRESSING CONTROL MESSAGES We propose a stateless header compression scheme for routing messages. It is called stateless because the state of the context is fixed and does not change with time; in other words, the state is part of the protocol. For example, the stateless compression of RREQ messages is based on the observation that the term "AODV RREQ message" connotes a specific type of packet with a predetermined set of headers, all with a subset of fields fixed to very specific values. These values are normally there to maintain a structured architecture with independent network layers, but header compression is all about violating this reference model for the sake of efficiency. If the

compressor can tell the decompressor that a RREQ message is encoded inside the packet, then there is no need to send all the fields that always have the same fixed values for RREQ messages. This is accomplished by reserving (hard-wiring) context zero for the stateless compression of control messages. To support a small set of n different types of control messages, a small type field of ceil(log2 n) bits is used. The tuple (0, type) tells the decompressor precisely the type of control message encoded in the packet. Type zero is reserved for RREQ messages. A careful analysis of the AODV message format shown in Figure 5 reveals a considerable amount of intra-packet redundancy. All the shaded fields can be compressed away. The fields with diagonal patterns can also be compressed under certain circumstances.
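The (0, type) convention can be sketched as follows. The message-type table and function names are illustrative assumptions; only the hard-wired context zero, the reserved type zero for RREQ, and the ceil(log2 n) type-field size come from the text.

```python
import math

CONTROL_CONTEXT = 0                          # hard-wired context label
TYPES = {"RREQ": 0, "RREP": 1, "RERR": 2}    # type 0 reserved for RREQ

def type_field_bits(n_types: int) -> int:
    """Bits needed to distinguish n control-message types."""
    return max(1, math.ceil(math.log2(n_types)))

def encode_control(msg_type: str) -> tuple:
    # (0, type): context zero marks a stateless control message.
    return (CONTROL_CONTEXT, TYPES[msg_type])

def decode_control(packet: tuple) -> str:
    ctx, t = packet
    assert ctx == CONTROL_CONTEXT, "not a stateless control message"
    return {v: k for k, v in TYPES.items()}[t]
```

With three message types, the type field fits in 2 bits, so the whole (context, type) preamble stays far smaller than the fixed-value fields it replaces.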

Figure 5: RREQ Message Format for IPv6

Stateless compression is also used to compress route reply (RREP) messages in a similar manner. Figure 6 shows the RREP message with the compressible fields shaded. The address fields in the RREP header can be omitted by using the address cache previously introduced. When a node broadcasts a route request message, its compressor is implicitly making a commitment to all its neighbors to keep the addresses in the cache for at least CACHE_COMITMENT_TIME. Any adjacent node responding with a RREP knows that if less than CACHE_COMITMENT_TIME has passed, then the immediate recipient certainly has a cache entry with the necessary addresses.


Figure 6: RREP Message Format for IPv6

5. EXPERIMENTAL RESULTS We have conducted several simulation studies to analyze the performance of our algorithms. We simulated an ad-hoc network carrying multiple VoIP sessions. Each session is assumed to be a two-way conversation with data flowing in both directions. Flows are assumed to be H.323 compliant and to use G.723 audio codecs, which typically generate 20-byte payloads every 33 milliseconds. Nodes move in a 400 m x 400 m network space at an average speed of 2 meters per second, using the random waypoint mobility model. Wireless nodes are capable of transmitting at 2 Mbps and have a radio range of 250 meters. We compare uncompressed IPv6/UDP/RTP, uncompressed IPv4/UDP/RTP and compressed IP/UDP/RTP using our compression techniques with a hybrid hop-by-hop/end-to-end approach. Note that no distinction is made between IPv6 and IPv4 for compressed headers because both can be compressed to about 3 bytes. We include uncompressed IPv4/UDP/RTP in our analysis for the sake of completeness, but our main interest is compressing IPv6 header chains.
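As a rough sanity check of this setup (not a result from the paper), the per-flow bandwidth implied by a 20-byte payload every 33 ms can be computed for each header chain:

```python
PAYLOAD = 20       # bytes per G.723 audio frame
INTERVAL = 0.033   # seconds between packets

def flow_kbps(header_bytes: int) -> float:
    """One-direction bandwidth of a single VoIP flow, in kilobits/s."""
    return (PAYLOAD + header_bytes) * 8 / INTERVAL / 1000

for name, hdr in [("IPv6/UDP/RTP", 60), ("IPv4/UDP/RTP", 40), ("compressed", 3)]:
    print(f"{name}: {flow_kbps(hdr):.1f} kbps")
```

This puts an uncompressed IPv6 flow at roughly 19.4 kbps per direction versus about 5.6 kbps compressed, which is consistent with the capacity gains reported below.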

5.1 Packet Loss In Figure 7 packet loss is shown as a function of the number of simultaneous flows, for uncompressed IPv6, uncompressed IPv4 and compressed headers. The graph shows a significant difference in packet loss between uncompressed IPv6 and compressed headers, with uncompressed IPv4 somewhere in the middle. For example, the biggest difference is for 5 simultaneous flows, where uncompressed IPv6

headers result in 22% packet loss while compressed headers result in 7% packet loss.

Figure 7: Packet loss as a function of simultaneous flows, for uncompressed IPv6, uncompressed IPv4, and compressed headers.

5.2 Latency and Jitter Reduced latency and jitter are yet another benefit of header compression in ad-hoc networks. Wireless links are not particularly fast, so large headers can add significant delay. Jitter also increases because a more severe transition exists between congested and un-congested states in the presence of larger headers. Figure 8 and Figure 9 show the contrast in delay between IPv4 and compressed headers, and between IPv6 and compressed headers, respectively. Each packet is time-stamped by the sender, and the time difference is recorded at the receiver. Each dot in the graphs represents a (t, d) tuple where t is the time the packet was received and d represents the delay. The reduced delay and jitter for compressed ROHC headers is quite obvious in both graphs.

Figure 8: Packet delay. Red points represent IPv4 packets and green points represent compressed packets.


Figure 9: Packet delay. Red points represent IPv6 packets and green points represent compressed packets.

6. CONCLUSIONS Several algorithms have been proposed in this paper to achieve efficient header compression in ad-hoc networks with very dynamic topologies and rapid node movement. A hybrid approach is used where hop-by-hop header compression is performed on IP headers while end-to-end header compression is performed on UDP/RTP headers. The context initialization algorithm leverages routing messages to partially establish IP header compression contexts and reduce the overhead of context initialization. The framework is also able to compress routing messages without requiring context synchronization. Our simulation results show a significant reduction in packet loss, thereby increasing the network capacity of ad-hoc networks carrying real-time traffic.

7. REFERENCES

[1] Sprint, "IP Monitoring Project", Feb. 6, 2004. http://ipmon.sprint.com/packstat/packetoverview.php

[2] CAIDA. “Packet Length Distributions”, 4 Aug 2004, http://www.caida.org/analysis/AIX/plen_hist

[3] Farber, D. J., Delp, G. S., and Conte, T. M., "A Thinwire Protocol for Connecting Personal Computers to the Internet", RFC 914, Sept. 1984.

[4] Jacobson, V., "Compressing TCP/IP Headers for Low-Speed Serial Links", RFC 1144, February 1990.

[5] Degermark, M., Nordgren, B. and S. Pink, "IP Header Compression", RFC 2507, February 1999.

[6] Casner, S. and V. Jacobson, "Compressing IP/UDP/RTP Headers for Low-Speed Serial Links", RFC 2508, February 1999.

[7] C. Bormann, C. Burmeister, M. Degermark, H. Fukushima, H. Hannu, L-E. Jonsson, R. Hakenberg, T. Koren, K. Le, Z. Liu, A. Martensson, A. Miyazaki, K. Svanbro, T. Wiebke, T. Yoshimura, and H. Zheng, "RObust Header Compression (ROHC): Framework and four profiles: RTP, UDP, ESP, and uncompressed", RFC 3095, July 2001.

[8] C. E. Perkins, "Ad-hoc on-demand distance vector routing", in MILCOM '97 panel on Ad-Hoc Networks, Nov. 1997.

[9] C. Perkins, E. Belding-Royer, S. Das, “Ad-hoc On-Demand Distance Vector (AODV) Routing”, RFC 3561, July 2003.

[10] D. Green, “Problem Statement for ROHC Compression Profile for Multipoint MANET Links”, Internet Draft

[11] Regis J. “Bud” Bates, “Broadband Telecommunications Handbook”. McGraw Hill