

Mobile Netw Appl
DOI 10.1007/s11036-008-0109-6

Robust Detection of Unauthorized Wireless Access Points

Bo Yan · Guanling Chen · Jie Wang · Hongda Yin

© Springer Science + Business Media, LLC 2008

Abstract Unauthorized 802.11 wireless access points (APs), or rogue APs, such as those brought into a corporate campus by employees, pose a security threat as they may be poorly managed or insufficiently secured. An attacker in the vicinity may easily get onto the internal network through a rogue AP, bypassing all perimeter security measures. Existing detection solutions do not work well for detecting rogue APs configured as routers that are protected by WEP, 802.11i, or other security measures. In this paper, we describe a new rogue AP detection method to address this problem. Our solution uses a verifier on the internal wired network to send test traffic towards the wireless edge, and uses wireless sniffers to identify rogue APs that relay the test packets. To quickly sweep all possible rogue APs, the verifier uses a greedy algorithm to schedule the channels for the sniffers to listen to. To work with encrypted AP traffic, the sniffers use a probabilistic algorithm that relies only on observed wireless frame sizes. Using extensive experiments, we show that the proposed approach can robustly detect rogue APs with moderate network overhead. The results also show that our algorithm is resilient to congested wireless channels and has low false positives/negatives in realistic environments.

B. Yan · G. Chen (B) · J. Wang · H. Yin
Department of Computer Science, University of Massachusetts Lowell, Lowell, MA, USA
e-mail: [email protected]

B. Yan
e-mail: [email protected]

J. Wang
e-mail: [email protected]

H. Yin
e-mail: [email protected]

Keywords wireless security · IEEE 802.11 · rogue AP · intrusion detection

1 Introduction

An unauthorized 802.11 wireless access point, or rogue AP, plugged into a corporate network poses a serious security threat to enterprise IT systems. Rogue APs are typically installed by employees in workplaces for convenience and flexibility. Although users could leverage available security measures, such as Wired Equivalent Privacy (WEP) or 802.11i, to protect their network communications, such measures may not be consistent with corporate security policies and they may be inefficient. For example, researchers have identified security flaws in both WEP [4] and 802.11i [9]. In addition, enterprises often have advanced authentication/firewalling methods and traffic classification/shaping/tracing requirements, which are unlikely to be consistent with rogue APs managed by individuals. Thus the rogue AP has been identified as one of the most critical wireless security vulnerabilities [17].

Several vendors (e.g., Aruba Networks, AirMagnet, and AirDefense) sell WLAN Intrusion Detection System (WIDS) products that can detect various wireless security threats. These WIDS solutions typically consist of a set of wireless sniffers that scan the airwaves for packet analysis. The sniffers can be overlaid with APs, meaning that they are strategically deployed as a separate infrastructure. They can also be integrated with APs, meaning that the APs themselves, in addition to serving wireless clients, periodically perform IDS functionalities. Researchers have recently proposed turning existing desktop computers into wireless sniffers to further reduce deployment cost while still providing reasonable coverage [1].

Detecting rogue APs using wireless sniffers requires that the sniffers listen to all WLAN channels to detect the presence of APs, either sequentially or non-sequentially using various channel-surfing strategies [6]. If a detected AP is not on the authorized list, it is flagged as a suspect. The detected suspect, however, may well be a legitimate AP belonging to a neighboring coffee shop or a nearby household. The question then becomes how to automatically verify whether the suspect AP is actually on the enterprise wired network (see Fig. 1).

A rogue AP may be configured as a layer-2 switching device or a layer-3 routing device, with the latter being more common among off-the-shelf APs. While working well for detecting layer-2 rogue APs, existing commercial WIDS solutions are not effective at detecting layer-3 APs that are protected by security measures, such as MAC address filtering, WEP, or 802.11i [5].

In this paper, we propose a novel solution to accurately detect rogue APs, whether they are protected or not. In particular, instead of sending test packets from the wireless side (see related work in Section 2), our solution has a verifier on the wired network that sends test packets towards the wireless side. Should any wireless sniffer pick up these special packets, we have effectively verified that the suspect AP relaying them is indeed on the internal network and is thus a rogue AP. The benefits of our approach include passive wireless sniffing that does not rely on communication between sniffers and the suspect APs, which is often problematic. Our detection mechanism is also accurate, since it involves a proactive confirmation step, and it can return the IP address of the rogue AP so the administrator can easily locate the AP.

Figure 1 Detecting rogue APs requires automatically differentiating legitimate APs on other networks from the rogue ones on our own network. (In the figure: AP1 is a legitimate AP in network A; AP2 is a legitimate AP in network B; the sniffer detects that AP2 is not on the legitimate AP list of network A. How can we know AP2 is not a rogue in network A?)

Our approach needs to address two issues for robust detection. First, we note that a layer-3 rogue AP normally comes with a network address translation (NAT) module so that multiple devices can share the same connection. It is impossible to send test packets directly to the associated wireless clients from the wired side, because they have private IP addresses. We note that NAT rewrites outbound packets from associated clients with its own address. Thus, we can use the verifier to monitor the wired traffic and send test packets to the active sources. If an active source is an AP, the test packets will be forwarded by NAT and observed by the wireless sniffers. Second, the sniffers may not be able to recognize test packets by examining the payload if the AP has enabled encryption. To solve this problem we devise a probabilistic verification algorithm based on a sequence of packets of specific sizes.

The contribution of this paper is a novel approach to detect protected rogue APs acting as layer-3 routers, which are difficult to detect using existing solutions. We design an algorithm for the wired verifier and wireless sniffers to cooperatively verify rogue APs. Using simulations and experiments, we show that the proposed approach can effectively detect rogue APs in a relatively short time period with moderate network overhead. Once a rogue AP is confirmed, the verifier returns its IP address, from which the switch port to that address can be found and automatically blocked until the rogue is removed. Our empirical evaluation also shows that the detection algorithm is resilient to congested wireless channels.

In this paper we assume that the owner of a rogue AP is not malicious: he simply sets up an unauthorized AP for convenience. The verifier and the wireless sniffers themselves are guarded by typical security measures, such as access control mechanisms and intrusion detection systems, against external attackers. In reality, a malicious rogue AP owner with physical wired access can do much more damage than setting up a rogue AP. Coping with such insider attacks is beyond the focus of this paper.

The rest of the paper is organized as follows. Section 2 summarizes related work. We present a network model and the problem statement in Section 3. In Sections 4 and 5 we describe our monitoring and verification algorithms, respectively. We present evaluation results in Section 6. We discuss potential limitations and scalability improvements of our method in Section 7, and we conclude in Section 8.

2 Related work

Many WLAN security vendors provide some form of rogue AP detection. A simple mechanism is to compare the unknown APs, found by the sniffers, with a list of legitimate APs using their MAC addresses. This may, however, lead to numerous false positives, classifying neighboring APs as rogues on the internal network.

One effective approach to verify layer-2 rogue APs, configured as switches rather than routers, is to poll all internal enterprise network switches using SNMP to collect the MAC addresses associated with each port on those switches. If a wireless sniffer observes any of these MAC addresses over the air, the associated AP must be on the wired network, given that the AP works as a layer-2 bridge. For layer-3 rogue APs, a common verification approach is to have a nearby sniffer actively associate with the suspect AP and ping a known host on the internal network that is not accessible from outside. If successful, the suspect AP is confirmed to be on the internal network and is indeed a rogue.
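The layer-2 check above reduces to a set-membership test. The sketch below (our own illustration, not from the paper) assumes the bridge tables have already been harvested from the switches, e.g., via SNMP polling, and compares them against MACs sniffed over the air:

```python
# Layer-2 rogue check sketch: an AP whose radio MAC appears in a switch
# bridge table, but not on the authorized list, is bridging wireless
# traffic onto the wired network. All inputs here are illustrative.

def find_layer2_rogues(switch_macs, sniffed_ap_macs, authorized_aps):
    """Return sniffed AP MACs seen on the wired switches but not authorized."""
    wired = {m.lower() for m in switch_macs}
    authorized = {m.lower() for m in authorized_aps}
    return sorted(m for m in {x.lower() for x in sniffed_ap_macs}
                  if m in wired and m not in authorized)
```

Note that, as the text explains, this check only works when the rogue AP bridges at layer 2 so that client MACs (or the AP's own MAC) are visible on the wire.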

This associate-and-ping approach, however, may have a sniffer associate with others' APs and send test packets through their internal networks. This would constitute a trespass of others' networks, and so one could face ethical and even legal charges. More importantly, this approach fails to verify protected APs that require valid MAC addresses or other authentication for successful association. A recent study confirmed that existing solutions were indeed not adequate for detecting protected APs that act as routers [5]. Our proposed solution, on the other hand, can effectively detect layer-3 protected rogue APs.

RogueScanner by Network Chemistry takes a fingerprinting approach, in which the detector collects various information from network devices and sends it back to a centralized server for classification. This raises both privacy and security concerns about sending internal network information to a third party. This approach has to build a large database of device profiles and needs user feedback on any device that is not in the database. Thus it must trust user-input data not to poison the database, whether maliciously or unintentionally.

DIAR proposes three more types of tests besides active association, leading to perhaps the most comprehensive solution for rogue AP detection [1]. First, one can use a MAC address test that requires compiling a list of known MAC addresses on the corporate networks, but this only works for link-layer APs. Second, one can run a DHCP test to identify device OS types using signatures from DHCP requests, which only works for APs configured to use DHCP. Finally, the wireless sniffers can replay some captured packets and see whether any wired monitor detects them. DIAR uses several heuristics to reduce false alarms, though the details and results are not presented in its publications. Their approach can bypass the encryption problem but requires running a wired monitor in each subnet. In contrast, we aim to run only one verifier for monitoring (see Section 7).

If we knew the precise location of a detected AP, we could determine that it is a rogue whenever it is located inside the enterprise. Existing WLAN localization algorithms can achieve 3–5 m accuracy [2], but they typically require extensive manual profiling to build RF maps and thus are not realistic to deploy on a large campus.

Wei et al. propose an interesting approach that uses timing information of TCP-ACK pairs to classify whether a source is wireless or wired (Ethernet) [20, 21], which can be used to detect rogue APs. Their idea is that two consecutively sent packets, such as TCP ACKs, will likely have longer time intervals between them after they traverse an 802.11 link than after they traverse Ethernet. We note, however, that these packets may be queued by a busy router, destroying the timing-interval information and resulting in inaccurate analysis. In the near future, as 802.11n APs become more popular, their bandwidth will be comparable to 100 Mbps Ethernet, making the time-interval information less useful.

We performed experiments for a quantitative demonstration. We set up a Linksys WRT54G AP connected to a wired sender A, which sent three back-to-back TCP packets to a wireless receiver B every second. On A we measured the two intervals between the three consecutive ACKs received from B. We connected another two wired clients C and D to the AP and generated traffic between them to make the AP "busy." Figure 2 shows the average time intervals of the ACK pairs. The x axis is the rate of packet transmissions between C and D through the AP; the larger the rate, the busier the AP. The y axis is the average time interval between ACKs, and the bars represent the 25% and 75% percentiles. The time intervals of ACK pairs when using 802.11g decreased below the 600 μs threshold used in [21], so this wireless AP may be misclassified as a wired source. Also, the time intervals of ACK pairs using 802.11g are much smaller than those using the slower 802.11b. This suggests that as even faster wireless technologies, such as 802.11n, are deployed, timing-based detection methods may have further difficulty differentiating wired and wireless sources. Our method, on the other hand, employs a verification procedure that determines whether a source is actually wireless by leveraging wireless sniffers, and is agnostic to wireless traffic variations.

Figure 2 Time intervals between TCP-ACK pairs (μs) versus the number of injections per second, for 802.11b and 802.11g (the bars are slightly shifted horizontally to avoid overlaps)

3 Network model

We assume that wireless sniffers are deployed to monitor the enterprise airspace. The sniffers employ channel-hopping strategies to detect the presence of APs, which may be using different communication channels. The sniffers are connected to the wired network and can communicate with an internal verifier. The sniffers update the verifier about detected APs and their channels. The verifier may instruct certain sniffers to switch to a particular channel during the verification process. Figure 3 shows a simplified network where sniffer S1 covers multiple APs, and sniffers S1 and S2 have overlapping coverage. C1 and C2 are wireless stations.

Figure 3 A simplified network model with APs and sniffers: a wired verifier, sniffers S1 and S2, access points AP1 and AP2, and wireless stations C1 and C2

A rogue AP could be a layer-2 device or a layer-3 device. While our approach works for both cases, we will focus on layer-3 devices, which are most common among consumer APs. A layer-3 AP typically comes with NAT, and each associated wireless client is assigned a private IP address. When a client communicates with a wired host with address Aout and port number Pout, NAT opens a special port on itself (Pnat) and rewrites the headers of outbound packets so they look as if they are coming from its own address Anat and port Pnat. At this point, any packet from Aout and Pout that is sent to Anat and Pnat will be forwarded by NAT to the wireless client, thus appearing on the wireless medium and observable by sniffers in range.

NAT ports are opened dynamically when clients initiate communications with outside destinations. Thus, the verifier needs to monitor outbound traffic and send test packets to the sources Asrc and Psrc. The test packets, however, need forged headers so they look as if they are coming from Aout and Pout. It is fairly easy to use existing tools to send packets with customized headers. If some of the sniffers report reception of these packets, Asrc must be the IP address of a rogue AP. Based on the network topology, the switches' ARP tables, and possibly DHCP logs, it is feasible to track down exactly which switch port the rogue AP is plugged into.

Note that the verifier essentially injects spoofed packets into a normal communication path. We need to be careful about what types of packets to inject so as not to disrupt normal communication. TCP packets have a sequence number (SN) header field to ensure reliable data transfer. The receiver maintains a window rcv_wnd and only accepts packets whose SN falls within that window. Thus, the verifier should inject test packets with the forged SN set outside that window, such as an SN the receiver has recently acknowledged. The receiver will then silently drop these test packets. UDP packets, on the other hand, do not have a transport-layer SN, and all packets will be forwarded to the application layer. To avoid confusing applications using UDP, we refrain from injecting UDP test packets, and the verifier monitors only TCP traffic in this paper. In practice, common UDP-based multimedia applications are fairly resilient to distorted content, so our method may still be applicable.
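To make the injection step concrete, the following sketch (our own, not the paper's tool) builds such a spoofed IPv4/TCP segment with a forged source and a stale sequence number using Python's standard library. All addresses, ports, and payload sizes are illustrative; the checksums are left at zero for brevity, so a real sender would need to fill them in (or rely on the stack when using a raw socket):

```python
# Craft a raw IPv4+TCP test segment whose SN lies outside the receiver's
# window; the wireless client will silently drop it after the NAT/AP
# relays it, yet the frame is still observable by a sniffer.
import struct
import socket

def build_test_segment(src_ip, dst_ip, sport, dport, stale_seq, payload):
    """Return raw IPv4+TCP bytes with a forged source and a stale SN."""
    # TCP header: ports, stale seq, ack=0, data offset 5 words, ACK flag set
    tcp = struct.pack("!HHIIBBHHH", sport, dport, stale_seq, 0,
                      5 << 4, 0x10, 65535, 0, 0) + payload
    total_len = 20 + len(tcp)
    # IPv4 header: version/IHL, TOS, total length, id, frag, TTL, proto=TCP
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, total_len, 0, 0, 64, 6,
                     0,  # checksum left zero in this sketch
                     socket.inet_aton(src_ip), socket.inet_aton(dst_ip))
    return ip + tcp

# Illustrative use: forge Aout:Pout -> Anat:Pnat with a 90-byte payload
pkt = build_test_segment("10.0.0.5", "192.168.1.23", 80, 52100,
                         stale_seq=1000, payload=b"x" * 90)
```

The payload length is what the verifier tunes to hit the target frame size discussed in Section 5.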

4 Wired traffic monitoring

The verifier monitors wired traffic. Every observed host on the internal network is potentially a wireless AP, unless explicitly marked by administrators as wired, such as the IP addresses of well-known network servers. All addresses allocated to user workstations, however, should be considered susceptible, since a user could configure an AP to use the static IP address assigned to her workstation when DHCP is not available.

Verifying a host takes a certain amount of time, as sniffers must switch channels and analyze observed packets (Section 5). If a burst of new sources is observed in a short period, these sources are queued by the verifier and tested sequentially. The verifier always picks the host with the oldest timestamp for verification. Hosts in the queue are updated with a new timestamp whenever more traffic from them is observed. The NAT port, however, could expire from being idle too long by the time the verifier tests a source that has been waiting in the queue for a while. The verifier therefore skips testing any source that has not been updated for a certain amount of time, say 5 min, to avoid sending traffic to an invalid port number. These sources will be tested the next time their traffic is observed.
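The queue discipline just described, oldest-first with a staleness cutoff, can be sketched as follows (a minimal illustration; the class and the 300-second constant are ours, following the "5 min" rule of thumb above):

```python
# Verifier test queue sketch: sources are verified oldest-first, have
# their timestamps refreshed on new traffic, and are skipped when stale
# (their NAT port may already have expired).
import time

STALE_AFTER = 300  # seconds, per the "5 min" guideline in the text

class VerifierQueue:
    def __init__(self):
        self.last_seen = {}  # source (ip, port) -> last traffic timestamp

    def observe(self, source, now=None):
        """Record (or refresh) a source when its traffic is seen."""
        self.last_seen[source] = time.time() if now is None else now

    def next_to_test(self, now=None):
        """Pop the oldest still-fresh source; silently drop stale ones."""
        now = time.time() if now is None else now
        while self.last_seen:
            source = min(self.last_seen, key=self.last_seen.get)
            ts = self.last_seen.pop(source)
            if now - ts <= STALE_AFTER:
                return source
            # stale: skip; it is re-queued when its traffic reappears
        return None
```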

It is possible to further reduce network load if the verifier can tell, based on traffic patterns, whether the observed hosts are likely connected through a wireless link. Wei et al. studied the timing patterns of packet intervals over wireless and wired links [20, 21], which are quite different since wireless links generally have less bandwidth and the 802.11 MAC requires a random backoff between two consecutive packets [8].

We want an online algorithm for the verifier to quickly classify observed hosts, so that the verifier can focus on testing only those classified as wireless sources. If the classification were 100% accurate, the verifier would not need to take further action. All existing timing-based classification algorithms, however, are prone to false positives and false negatives, as we discuss in Section 2. Inspired by Wei et al.'s work [20], we chose to simply count the short packet intervals, i.e., those less than 250 μs, between TCP packets in both the inbound and outbound directions. The verifier classifies any source whose ratio of inbound to outbound short intervals exceeds a threshold as a potential wireless host. The rationale is that the number of outbound short intervals, observed after the packets have gone through a wireless link, is expected to be much smaller than the number of inbound short intervals, observed before the packets reach the AP. This simple method is fast to compute, but may classify some wired hosts as wireless sources. The returned list of suspect IP addresses is then further verified by the algorithm discussed in Section 5.

In summary, the verifier may test every observed internal host or only those hosts classified as wireless sources. The first approach is straightforward to implement. The classification can reduce test traffic, but it may also lead to inaccurate results. In both cases, a verified non-rogue address will not be tested again for some time, after which it becomes susceptible again and is subject to verification. The reason for periodic testing is that a rogue AP can be installed using a previously verified IP address at any time. Similarly, a wireless source mistakenly classified as wired will also be tested again after the previous verification expires. We evaluate these two methods in Section 6.

5 Rogue AP verification

The verifier sends test packets to observed sources and checks whether some wireless sniffers can hear these packets. But a rogue AP may encrypt its traffic, so sniffers cannot rely on special signatures embedded in the application-layer data. One may borrow ideas from covert channels, in which the verifier deliberately manipulates the timing between packets without injecting any new packets. The packet intervals thus carry unique information, sometimes called a watermark, which can be identified by a passive sniffer [19]. While not intrusive, this approach seems less appealing in a wireless environment because 802.11 MAC contention delays could affect the accuracy of timing-based analysis. It also requires customizing routers on the data path, which is a non-trivial task and could degrade routing performance.

Another approach is to send a sequence of packets whose sizes follow some predefined pattern that is unlikely to be observed in normal traffic, such as "1 2 4 8 16." The wireless sniffers, however, may not capture all the packets due to transient losses. Other packets may also mix into the sequence as the packets go through shared queues and the wireless medium. These limitations make this approach less robust.

5.1 Packet size selection

We present a new solution. In particular, we use a verifier on the wired network to send test packets with sizes not frequently seen on the suspect APs. Namely, the sniffers report the empirical distribution of packet sizes observed from APs to the verifier, which then selects an uncommon size for the test packets. This simple approach relies on an implicit assumption that the sizes of downstream packets from APs are not uniformly distributed. We analyzed a network trace collected from a WLAN made available to attendees of a four-day academic conference (Sigcomm 2004) [15]. Figure 4a shows the PDF of one AP's downstream data packet sizes (the injected test packets will appear as 802.11 data packets on the wireless medium). It is clear that most packet sizes appear infrequently. Figure 4b shows the results for an AP deployed at a university campus, where almost half of the downstream data packets have a packet size of 110 bytes.

Figure 4 Size distribution of downstream data packets: (a) an AP deployed at an academic conference (11:7E:5D:6B:A8:AD); (b) an AP deployed at a university campus (00:0B:86:81:38:B0)

In general, we want to choose a relatively small packet size so that the verifier demands less bandwidth. A small packet is also unlikely to be fragmented by APs and thus will not be missed by sniffers. Once a size is chosen, the verifier notifies the sniffers to watch only for packets of this particular size. To obtain a frame size of N, the verifier should send TCP packets whose application payload is N minus the size of all the protocol headers captured by the sniffer. Note that the sniffer should also account for the overhead of encrypted packets. For example, there are an additional 12 bytes of overhead for WEP packets (3-byte IV, 1-byte key number, and two 4-byte ICVs).
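The payload arithmetic above can be written down directly. This is a back-of-the-envelope sketch under our own assumptions about which headers the sniffer counts (LLC/SNAP plus IPv4 plus TCP, without options); the 12-byte WEP figure is taken from the text:

```python
# Payload sizing sketch: how many TCP payload bytes to send so that the
# sniffed 802.11 frame body comes out at the chosen target size.
# Header sizes are assumptions (no IP/TCP options, LLC/SNAP encapsulation).

LLC_SNAP = 8        # LLC/SNAP encapsulation on 802.11 data frames
IP_HDR = 20         # IPv4 header without options
TCP_HDR = 20        # TCP header without options
WEP_OVERHEAD = 12   # per the text: 3B IV + 1B key number + two 4B ICVs

def tcp_payload_for_frame(target_frame_size, encrypted=False):
    """TCP payload bytes needed so the sniffed frame body is target size."""
    overhead = LLC_SNAP + IP_HDR + TCP_HDR
    if encrypted:
        overhead += WEP_OVERHEAD
    payload = target_frame_size - overhead
    if payload < 0:
        raise ValueError("target frame size smaller than header overhead")
    return payload
```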

Suppose that the two geographically close APs in Fig. 3 belong to two different companies; S1 may overhear test packets from the other company's AP if verification processes occur simultaneously in the two networks. In this case, S1 may falsely conclude that the other company's AP is a rogue. To mitigate this problem, the verifier randomly chooses a packet size from the M least-observed packet sizes on the network, minimizing the chance of a collision in both verification time and test packet size.
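The size-selection step might be sketched as below. The function name, the candidate size range, and the default M are our illustrative choices, not the paper's:

```python
# Test-size selection sketch: pick one of the M least-observed frame
# sizes at random, so that two neighboring networks running verification
# concurrently are unlikely to choose the same size.
import random
from collections import Counter

def pick_test_size(observed_sizes, m=10, lo=60, hi=300):
    """Choose a test frame size uniformly among the m rarest candidates
    in [lo, hi], based on the sniffers' observed size distribution."""
    counts = Counter(observed_sizes)
    candidates = sorted(range(lo, hi + 1), key=lambda s: (counts[s], s))
    return random.choice(candidates[:m])
```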

5.2 Binary hypothesis testing

To avoid false positives caused by normal packets that happen to have the same size as the test packets, the verifier sends more than one test packet to improve the robustness of detection. The question is how many test packets the verifier should send. Note that we may not observe a back-to-back train of packets of the same size, because normal packets of different sizes may be interleaved with the test packets as they traverse the shared wireless medium. The APs may also send other packets, such as beacons, in the middle of a test packet sequence.

We use sequential hypothesis testing theory to determine the number of test packets that can achieve a desired detection accuracy [18]. Assume that the probability of seeing data packets with the chosen test size s from a monitored AP is p, and that the sniffers independently determine whether the AP is relaying test packets based on observed downstream traffic. Intuitively, the more packets of the test size are observed, the more likely a verification is in progress and the relaying AP is a rogue. For a given AP monitored by some sniffer, let X_i be a random variable that represents the size of the i-th downstream data packet, where

$$X_i = \begin{cases} 0, & \text{if the packet size} \neq s \\ 1, & \text{if the packet size} = s \end{cases}$$

We consider two hypotheses, H_0 and H_1, where H_0 states that the monitored AP is not relaying test packets (thus not a rogue), and H_1 states that the AP is relaying test packets (thus a rogue). Assume that the random variables X_i|H_j are independent and identically distributed, conditional on the hypothesis H_j. We express the distribution of X_i as follows:

$$\Pr[X_i = 0 \mid H_0] = \theta_0, \quad \Pr[X_i = 1 \mid H_0] = 1 - \theta_0$$

$$\Pr[X_i = 0 \mid H_1] = \theta_1, \quad \Pr[X_i = 1 \mid H_1] = 1 - \theta_1$$

We can specify the detection performance using the detection probability, P_D, and the false positive probability, P_F. In particular, we can choose desired values for α and β so that

$$P_F \leq \alpha \quad \text{and} \quad P_D \geq \beta \quad (1)$$

where typical values might be α = 0.01 and β = 0.99.


The goal of the verification algorithm is to determine which hypothesis is true while satisfying the performance condition (1). Following [18], as each packet is observed we calculate the likelihood ratio as follows:

$$\Lambda(X) = \frac{\Pr[X \mid H_1]}{\Pr[X \mid H_0]} = \prod_{i=1}^{n} \frac{\Pr[X_i \mid H_1]}{\Pr[X_i \mid H_0]} \quad (2)$$

where X is the vector of events (packet size is s or not) observed so far and Pr[X|H_j] represents the conditional probability mass function of the event stream X given that hypothesis H_j is true. The likelihood ratio is then compared to an upper threshold, η_1, and a lower threshold, η_0. If Λ(X) ≤ η_0 we accept hypothesis H_0. If Λ(X) ≥ η_1 we accept hypothesis H_1. If η_0 < Λ(X) < η_1 we wait for the next observation and update Λ(X). These two thresholds can be bounded by simple expressions of P_F and P_D (η_1 = β/α and η_0 = (1−β)/(1−α)), from which we can compute the expected number of observations needed before the verification algorithm accepts one hypothesis [11]:

$$E[N \mid H_0] = \frac{\alpha \ln\frac{\beta}{\alpha} + (1-\alpha) \ln\frac{1-\beta}{1-\alpha}}{\theta_0 \ln\frac{\theta_1}{\theta_0} + (1-\theta_0) \ln\frac{1-\theta_1}{1-\theta_0}}$$

$$E[N \mid H_1] = \frac{\beta \ln\frac{\beta}{\alpha} + (1-\beta) \ln\frac{1-\beta}{1-\alpha}}{\theta_1 \ln\frac{\theta_1}{\theta_0} + (1-\theta_1) \ln\frac{1-\theta_1}{1-\theta_0}} \quad (3)$$
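Eq. 3 is straightforward to evaluate numerically. The helper below is a direct transcription (our own code), useful for picking θ_1 given a target number of observations:

```python
# Expected number of packet observations before the sequential test
# accepts H0 or H1, transcribed directly from Eq. 3.
from math import log

def expected_observations(alpha, beta, theta0, theta1):
    """Return (E[N|H0], E[N|H1]) for the given error targets and thetas."""
    num0 = alpha * log(beta / alpha) + (1 - alpha) * log((1 - beta) / (1 - alpha))
    num1 = beta * log(beta / alpha) + (1 - beta) * log((1 - beta) / (1 - alpha))
    den0 = theta0 * log(theta1 / theta0) + (1 - theta0) * log((1 - theta1) / (1 - theta0))
    den1 = theta1 * log(theta1 / theta0) + (1 - theta1) * log((1 - theta1) / (1 - theta0))
    return num0 / den0, num1 / den1
```

For example, with α = 0.01, β = 0.99, θ_0 = 0.9, and θ_1 = 0.7, roughly 30 observations are expected under H_1, consistent with the range shown in Fig. 5a.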

Given a suspect AP, θ_0 can be empirically calculated by the sniffers. The verifier injects a sequence of test packets of the same size, producing a new probability θ_1. Given the desired performance conditions α and β (Eq. 1), we can establish how θ_1 relates to the number of observations needed (Eq. 3) for the verifier to accept H_0 or H_1 (deciding whether the AP is a rogue or not). Figure 5a shows that we want to choose a packet size that appears with small probability (1 − θ_0) in normal communications, to reduce the number of observations. It also shows a tradeoff for θ_1: the verifier must inject more packets to obtain a smaller θ_1, which in turn yields quicker algorithm termination.
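The sniffer-side decision procedure of Section 5.2 can be sketched as a standard sequential probability ratio test. This is our own minimal rendering of the update rule and thresholds defined above; the observation stream is a sequence of bits, 1 when a downstream frame has the chosen test size s:

```python
# Sequential test sketch: update the likelihood ratio (Eq. 2) per observed
# frame and stop as soon as it crosses either threshold.

def sprt(observations, theta0, theta1, alpha=0.01, beta=0.99):
    """Return 'rogue', 'benign', or 'undecided' after the observations.
    observations: iterable of 0/1, where 1 means frame size == s."""
    eta1 = beta / alpha              # upper threshold: accept H1 (rogue)
    eta0 = (1 - beta) / (1 - alpha)  # lower threshold: accept H0 (benign)
    ratio = 1.0
    for x in observations:
        if x:  # Pr[X=1|H1] / Pr[X=1|H0] = (1-theta1)/(1-theta0)
            ratio *= (1 - theta1) / (1 - theta0)
        else:  # Pr[X=0|H1] / Pr[X=0|H0] = theta1/theta0
            ratio *= theta1 / theta0
        if ratio >= eta1:
            return "rogue"
        if ratio <= eta0:
            return "benign"
    return "undecided"
```

With θ_0 = 0.9 and θ_1 = 0.7, each test-sized frame multiplies the ratio by 3, so only a handful of such frames are needed to confirm a rogue.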

The number of test packets relates to the current load of the monitored AP. If the rate of downstream data packets is measured as R, the verifier needs to inject R/2 test packets to bring θ_1 to about 0.7. Fortunately, R is often small, particularly for rogue APs, so the verifier does not have to send a large number of test packets. We computed the number (and total bytes) of downstream data packets of an AP deployed in our department over an 8.5-h period in the afternoon. The load on that AP was fairly light; most of the time R was less than 10 packets per second and the bandwidth usage was less than 8 Kbps (Fig. 5b).

5.3 Sniffer channel scheduling

Wireless sniffers scan through all possible channels to detect suspect APs that are not listed as legitimate APs on the monitored network. A suspect AP, however, could either be a rogue on the monitored network or a legitimate AP on a nearby network.

After suspect APs are identified, the verifier sends test packets to target IP addresses, identified using the methods in Section 4, to see whether the suspect APs relay these test packets. In particular, the verifier instructs the sniffers in the proximity of the suspect APs to monitor all possible channels and report to the verifier. If one of these sniffers on a particular channel detects that test packets are being relayed from the suspect AP to a wireless client, the verifier confirms

[Figure 5 Hypothesis termination and size distribution of downstream data packets. (a) The expected number of observations E[N|H1] to accept H1, plotted against θ1 for θ0 = 0.85, 0.90, and 0.95. (b) Number/bytes of downstream data packets per second over time (10-second windows).]


that the suspect AP is indeed a rogue on the monitored network.

A sniffer can listen to multiple APs if they are in its range, even if they operate on different channels. A sniffer, however, can only listen to one channel at a time, and so it has to switch to different channels to monitor other APs in its range. For example, if there is only one sniffer s1 in Fig. 3 and the two APs, operating on different channels, are within the range of s1, then s1 can only cover the two APs sequentially, since it has to be tuned to one channel at a time. Thus, for every source verification, the verifier needs to repeat the test packets when scheduling sniffers over different channels.

Assume that there are N suspect APs and M sniffers. We label a suspect AP by i and a sniffer by j, where 1 ≤ i ≤ N and 1 ≤ j ≤ M. We call a suspect AP a target. Each target i transmits on a channel ci, which could be any integer between 1 and Cmax, the maximum number of channels available. For example, Cmax = 11 for 802.11b.

Note that wireless sniffers are typically assigned for continuous performance and security monitoring [1, 16], which will be interrupted as we schedule them for rogue AP verification. Thus, ideally, we want to reduce the number of sniffers involved for verification purposes, while still covering all suspect APs.

We use S_j to denote the set of targets that can be covered by sniffer j. We want to use the smallest number of sniffers to cover all targets. This is equivalent to solving the minimum set cover problem, which is known to be NP-hard (see, e.g., [7]). The following greedy algorithm finds in polynomial time an approximation to the problem with an approximation guarantee of H_d (see, e.g., [10]), where H_d = ∑_{ℓ=1}^{d} 1/ℓ is the d-th harmonic number and d is the cardinality of the largest set of targets that can be covered by a sniffer.

Algorithm A

1. Set C ← ∅; S_j^(1) ← S_j for 1 ≤ j ≤ M; I ← {1, 2, ..., N}; ℓ ← 0.
2. Set ℓ ← ℓ + 1. Select j_ℓ such that |S_{j_ℓ}^(ℓ)| = max_{1≤j≤M} |S_j^(ℓ)|.
3. Set C ← C ∪ {j_ℓ}; S_j^(ℓ+1) ← S_j^(ℓ) − S_{j_ℓ}^(ℓ) for all j; I ← I − S_{j_ℓ}^(ℓ).
4. If I = ∅, stop and output C. Otherwise go to Step 2.
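Algorithm A is the classic greedy heuristic for minimum set cover. A minimal Python sketch (the dict-of-sets encoding of the sets S_j is illustrative):

```python
def greedy_sniffer_cover(target_sets):
    """Greedy minimum set cover (Algorithm A).

    target_sets: dict mapping sniffer id j to the set S_j of suspect APs
    (targets) it can cover. Returns the list C of selected sniffer ids.
    """
    uncovered = set().union(*target_sets.values())  # I: targets still uncovered
    chosen = []                                     # C
    while uncovered:
        # Step 2: pick the sniffer covering the most still-uncovered targets
        j = max(target_sets, key=lambda s: len(target_sets[s] & uncovered))
        gain = target_sets[j] & uncovered
        if not gain:        # remaining targets are not coverable by any sniffer
            break
        chosen.append(j)    # Step 3: commit the sniffer, shrink the residual sets
        uncovered -= gain
    return chosen
```

For example, `greedy_sniffer_cover({1: {"a", "b", "c"}, 2: {"c", "d"}, 3: {"d", "e"}})` selects sniffers 1 and 3.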

On the other hand, a sniffer may have to switch channels to cover all nearby APs. We want to reduce the number of channels a sniffer has to switch to, since the verifier needs to repeat the test packets for every new channel. This can be achieved by coordinating nearby sniffers to reduce the maximum number of channels any sniffer has to check.

Thus we need a new algorithm to minimize the number of channel switches, in addition to minimizing the number of sniffers. These two objectives, however, compete with each other: using fewer sniffers could increase the number of channel switches, and using fewer channel switches could in turn require more sniffers. Thus, we consider the sum of the number of sniffers and the number of channel switches and try to minimize this sum. This problem contains minimum set cover as a special case (e.g., when there is only one fixed channel available) and so it is NP-hard. Similar to Algorithm A, we present a linear-time approximation algorithm for this problem.

We use an integer k to label channels, where 1 ≤ k ≤ Cmax. Let S_jk denote the set of targets operating on channel k that fall in the range of sniffer j, where 1 ≤ j ≤ M and 1 ≤ k ≤ Cmax. Note that S_jk for some k may be empty; remove these S_jk from consideration. For convenience, in what follows we assume that S_jk is not empty.

Let J denote the set of sniffers j and I the set of targets i detected by some j on a certain channel k.

Algorithm A′

1. Set C ← ∅; S_jk^(1) ← S_jk; ℓ ← 0.
2. Set ℓ ← ℓ + 1. Select (j_ℓ, k_ℓ) such that |S_{j_ℓ k_ℓ}^(ℓ)| = max_{j∈J, 1≤k≤Cmax} |S_jk^(ℓ)|. Set s_ℓ ← j_ℓ and c_ℓ ← k_ℓ.
3. Set C ← C ∪ {j_ℓ}; S_jk^(ℓ+1) ← S_jk^(ℓ) − S_{j_ℓ k_ℓ}^(ℓ); I ← I − S_{j_ℓ k_ℓ}^(ℓ).
4. If I = ∅, stop and output C. Otherwise go to Step 2.

Let r be the value of ℓ when Algorithm A′ stops. At the ℓ-th round, for 1 ≤ ℓ ≤ r, the verifier schedules sniffer s_ℓ to listen to channel c_ℓ.

It is straightforward to see that Algorithm A′ runs in O(∑_{j,k} |S_jk|) time. Let d = max{|S_jk| : j ∈ J and 1 ≤ k ≤ Cmax}. Then Algorithm A′ produces an approximation with an approximation ratio of H_d.
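Algorithm A′ applies the same greedy selection to (sniffer, channel) pairs, so each pick fixes both the sniffer s_ℓ and the channel c_ℓ for round ℓ. A minimal sketch (the encoding is illustrative, and ties are broken by dict iteration order):

```python
def greedy_sniffer_channel_schedule(sjk):
    """Greedy joint sniffer/channel cover (Algorithm A').

    sjk: dict mapping (sniffer j, channel k) to the set S_jk of suspect APs
    transmitting on channel k within sniffer j's range (empty sets removed).
    Returns the schedule [(s_1, c_1), (s_2, c_2), ...]: at round l, the
    verifier tunes sniffer s_l to channel c_l.
    """
    uncovered = set().union(*sjk.values())
    schedule = []
    while uncovered:
        # pick the (sniffer, channel) pair covering the most uncovered targets
        pair = max(sjk, key=lambda p: len(sjk[p] & uncovered))
        gain = sjk[pair] & uncovered
        if not gain:
            break
        schedule.append(pair)
        uncovered -= gain
    return schedule
```

For example, with sniffer 1 covering APs a, b on channel 1 and AP c on channel 6, and sniffer 2 covering APs c, d on channel 6, the schedule is sniffer 1 on channel 1, then sniffer 2 on channel 6.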

Remark If we want to give priority to minimizing the number of sniffers, then in Step 2 of Algorithm A′ we could select S_{j_ℓ k_ℓ} such that j_ℓ has been selected in a previous iteration whenever more than one pair (j_ℓ, k_ℓ) satisfies the condition in Step 2.


6 Evaluation results

In this section we present experimental results of our detection method. We first evaluate wired traffic monitoring methods using extensive network traces. We then evaluate the sniffer channel scheduling algorithm using two real-world AP databases. We quantify the impact of congested channels on the performance of packet sniffing and show the accuracy and speed results for our detection algorithm. Finally, we present detection accuracy using empirical AP traces.

6.1 Wired traffic monitoring

The verifier monitors wired traffic and tests internal addresses. The premise of this approach is that the verification load can be amortized over time. To evaluate this approach, we need extensive long-term network traces. Unfortunately, we do not have such data from real enterprise networks. Instead, we use traces collected from the Dartmouth campus WLAN as a baseline evaluation. These traces only contain traffic from and to wireless hosts. We took two 10-day data sets collected in November 2003, one from APs in a library building and the other from APs in a residential hall. There are more than 1200 unique IP addresses in each trace. We scanned each trace chronologically; in 75% of the cases the time needed to observe a previously unseen address is greater than 100 s.

We simulated the verifier to test every host (IP address) in these two traces. We used 30 s as a fairly conservative value for the time needed to verify a source (in practice, a verification only needs a couple of seconds when the maximum number of channels checked by any sniffer is small). Thus, a total of 120 hosts can be verified in an hour. When the verifier

[Figure 6 Length of verification queue (campus WLAN): queue length versus time since trace start (hours), for the Library and Residential traces.]

was testing an address, all newly arrived hosts were queued. Any newly arrived packet from a queued host would pull that host to the end of the queue, and the verifier always picked the head of the queue when it started the next verification. If the host to be verified had not generated any packets for 5 min, the verifier would simply ignore it to avoid sending test packets to an expired NAT port number.
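The queue discipline just described can be sketched as a trace replay. This is a simplified sketch assuming the fixed 30-s verification time and 5-min expiry from the text; the function names and event-driven structure are illustrative, not the simulator used in the paper:

```python
from collections import OrderedDict

VERIFY_TIME = 30.0   # conservative per-source verification time (seconds)
EXPIRE = 300.0       # ignore hosts silent for 5 min (stale NAT ports)

def simulate_verifier(packets):
    """Replay (timestamp, host) pairs sorted by time.

    Returns (verified, skipped): hosts that were verified, and hosts that
    expired in the queue before the verifier reached them.
    """
    queue = OrderedDict()            # host -> last packet time; order = FIFO queue
    verified, skipped = set(), set()
    free_at = 0.0                    # when the verifier next becomes idle

    def drain(now):
        nonlocal free_at
        while queue:
            host, last_seen = next(iter(queue.items()))   # head of the queue
            start = max(free_at, last_seen)
            if start > now:          # this verification cannot start yet
                break
            queue.popitem(last=False)
            if start - last_seen > EXPIRE:
                skipped.add(host)    # host went silent; skip, no time consumed
            else:
                verified.add(host)
                free_at = start + VERIFY_TIME

    for ts, host in packets:
        drain(ts)
        if host not in verified:
            queue.pop(host, None)    # a new packet pulls the host to the tail
            queue[host] = ts
    drain(float("inf"))
    return verified, skipped
```

Replaying a burst of hosts through this sketch reproduces the behavior discussed below: queued hosts are served in order, and a host left waiting longer than the expiry window is skipped rather than tested.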

Figure 6 shows how the length of the verification queue changed over 10 days. The length of both queues never exceeded 20. The queues, however, did have some hosts remaining untested: roughly 6 for the library trace and 10 for the residential trace. This means that some hosts appeared early in the trace, expired in the queue before they could be verified, and were never seen again (so no more traffic triggered verification). Most likely this is an artifact caused by device mobility. For example, some guests may have used the wireless network when visiting the library or the dorm, but then left campus and thus were not seen again in our 10-day traces.

Figure 7 shows the distribution of the verification delay since the first time a request was observed from a host. In more than 98% of the cases a host could be verified within 5 min of its first appearance. A few hosts were not verified until a couple of days later, most likely when these mobile hosts visited the monitored APs again. We also found that in more than 95% and 99% of the cases a host could be verified within 50 s of its last update, for the library and residential traces, respectively. These results suggest that the verifier works effectively for a moderate-size network.

While the Dartmouth trace is a long-term trace, it only consists of wireless hosts and does not represent a typical enterprise where many hosts are wired. So we took a one-day enterprise network trace presented in [13], and merged the data sets collected from oddly

[Figure 7 Verification delay since first request (campus WLAN): CDF of time since first request (hours), for the Library and Residential traces.]


[Figure 8 Length of verification queue (enterprise LAN): queue length versus time since trace start (hours), for the odd-port and even-port traces.]

and evenly numbered router ports to produce two time-continuous traces. We then considered each trace, emulating a verifier that monitors a port for 1 h, moves on to the next port for another hour, and so on. We found that in about 46% and 66% of the cases for the two traces, the time interval needed to observe a new host was less than 10 s. We expect that the intervals would be larger if we had longer traces from the same router ports.

Figure 8 shows the change of the verification queue length. As expected, the queue length jumped and then gradually decreased as the verifier moved to a new port. This pattern is quite visible for the even-port trace. At the 6th hour of the odd-port trace, the monitor started on an active subnet and 539 hosts were observed during that hour. Many of these hosts could not be verified in time during this period and they stayed in the queue. We observed a similar pattern at the 10th hour for another busy subnet. We note that the queued hosts will be verified once the monitor cycles through the ports and visits the previous ports again.

Figure 9 shows the distribution of the verification delay since the first time a request was observed from

[Figure 9 Verification delay since first request (enterprise LAN): CDF of time since first request (hours), for the odd-port and even-port traces.]

a host. For those hosts that were verified, 77% were verified within 20 min of their first appearance, for both traces. Also, 49% were verified within 100 s of their last update. For all these tests, the verifier could achieve even faster speed if it could test a source in less than 30 s.

Note that the verifier can reduce its workload by only testing likely wireless sources (Section 4). To evaluate this approach, we set up a Linksys AP (model WRT54G), configured to use 802.11b, and plugged it into our department network to mimic a rogue AP. One of the authors' laptops used that AP as its main network connection whenever the author was in his office. The Web browser on that laptop was configured to use a Web proxy running tcpdump to collect the HTTP traffic. Thus, all the Web transactions from that laptop went through the AP and the Web proxy and were recorded in the tcpdump trace. We then ran the classification algorithm over this trace to see whether we could classify the IP address of the AP as wireless. While we only had one trace, we started classification at different times in the trace. We obtained a total of 667 classifications over a 14-day trace with 30-min separation. Our classifier achieved 100% accuracy, and Fig. 10 shows the distribution of how long the classifier took to make a decision after it saw the first request. In about 93% of the cases the classifier could conclude in less than 100 s.

Another question is whether the classifier can correctly identify wired sources to avoid testing them further. We ran the classification over the previous one-day enterprise network traces, where 568 unique IP addresses were classified as wireless and 227 were classified as wired. The median time to classify a wireless and a wired source is 86 and 476 s, respectively. While we do not have the ground truth to tell the accuracy of the classification over this trace, it is likely,

[Figure 10 Classification time (HTTP proxy): CDF of classification time (seconds).]


assuming most of the enterprise hosts are wired, that the algorithm correctly classified about 30% of the hosts as wired and thus could reduce the testing traffic significantly. It is feasible to further reduce the testing load by using more sophisticated statistical testing algorithms, which require training data sets to fine-tune detection thresholds [20, 21].

6.2 Sniffer channel scheduling

Here we evaluate the sniffer channel scheduling algorithm. Ideally we want to use the fewest sniffers to cover all the suspect APs. A sniffer can only tune into different channels sequentially, so we also want a minimum number of channel tunings to speed up the verification.

We used two empirical AP databases to evaluate scheduling efficiency. One database contains the coordinates of APs identified by wardriving the downtown Seattle area, conducted by Placelab.1 We took 200 APs in a 500 × 500 m2 area, though no channel numbers are listed for these APs in the database. We assumed that each could be either an 802.11b/g or an 802.11a AP, and we randomly assigned each AP a channel from 24 available channels (11 in 802.11b/g and 13 in 802.11a). The other database contains both the coordinates and channel numbers of APs deployed on the Dartmouth campus, and we again took 400 APs located in a 500 × 500 m2 area. Dartmouth APs use both 802.11b/g and 802.11a and occupy only 12 orthogonal 2.4 GHz/5 GHz channels in total.

We set the sniffer's monitoring range to 100 m, and virtually deployed a random number of sniffers (from 1 to the number of APs). The scheduling algorithm was run over 10,000 topologies (T, S), where T is the set of APs that have at least one sniffer covering them, and S is the set of sniffers that cover at least one AP. Note that T can be smaller than the number of APs if some APs are not covered by any sniffer for a given topology.

Figure 11 shows the ratio of the number of scheduled sniffers to the total number of sniffers (S). The x axis is S/T, the density of sniffers, which has a bucket size of 0.1. For example, the y value at x = 0.5 is the average of scheduled sniffer ratios for all x in (0.4, 0.5]. MaxChan is the maximum number of channels used by the APs (the Seattle wardriving APs used all 24 available channels while the Dartmouth managed APs used 12). Not surprisingly, the results show that the ratio of needed sniffers decreases as the sniffer density increases (more

1 http://www.placelab.org/database/.

[Figure 11 Number of scheduled sniffers: scheduled sniffer ratio versus sniffer/AP ratio, for Seattle (MaxChan: 24) and Dartmouth (MaxChan: 12).]

sniffers are collaborating for AP coverage). If APs can choose from more channels, more sniffers are needed to cover them. In the context of enterprise WLAN deployment, there are usually more sniffers than suspect APs, and the latter are commonly configured to the 3 non-overlapping channels in 2.4 GHz. This means only a small number of sniffers are needed to cover these APs.

Figure 12 shows the maximum number of channels to be checked across all sniffers. The x axis is the ratio of the scheduled sniffers to the total sniffers S, with a bucket size of 0.1. The y axis is the average of the maximum number of channels to check over all x in (x − 0.1, x]. The larger the maximum number of channels, the longer it takes to finish each verification for one source. The results clearly show the tradeoff between the number of needed sniffers and the maximum number of channels: if we can coordinate more sniffers, the verification process runs faster. Note that we normally do not want to use all sniffers, which run continuous traffic monitoring tasks [1, 16]. It is thus the network

[Figure 12 Maximum number of channels to check, versus scheduled sniffer ratio, for Seattle (MaxChan: 24) and Dartmouth (MaxChan: 12).]


operator's decision how many sniffers can be interrupted for verification.

6.3 Impact of congested channels

Wireless sniffers are typically embedded devices with limited resources. They usually have 200–300 MHz CPUs and 16–32 MB RAM, run a tailored Linux such as OpenWRT, and support open-source wireless drivers such as MadWifi. We note that the packet sniffing program, running in Linux's user space, often cannot capture all frames in the air because of long distance to the transmitter, multi-path collisions, and, most importantly, congested channels. The sniffing software uses the PCAP library for frame capture; PCAP runs in the Linux kernel and maintains its own buffer for frames captured from a radio interface. If the channel is congested with a large number of wireless frames, PCAP will drop incoming frames from the radio as its buffer becomes full and the sniffing program falls behind on processing the frames in the buffer.

To quantify the effect of a congested channel on frame capture, we set up an Asus AP and a "jammer" that constantly injected wireless frames into the air using the file2air utility. Note that the jammer still follows the 802.11 MAC protocol. A wired workstation (WS) periodically sent UDP packets to a wireless station (STA) associated with our AP. We placed two sniffers, one about 3 ft from the AP and the other about 30 ft from the AP. On each sniffer, we recorded how many frames were captured by the radio, using an open-source MadWifi utility, and how many frames were captured by our sniffing program. We can thus calculate the frame-loss rate at the sniffer, averaged over 20 tests. Figure 13 shows that the sniffing program lost more than 70% and 80% of the frames captured by the radio on the two sniffers when the jammer injected 200 frames per second into the air. The

[Figure 13 Wireless frame loss rate at sniffers: frame loss rate versus frame injection rate, for the closer and farther sniffers.]

[Figure 14 Test UDP packet loss rate at sniffers and the receiving station: UDP loss rate versus frame injection rate, for the closer sniffer, the farther sniffer, and the STA receiver.]

loss rate clearly increases as the channel becomes more congested.

We also calculated the loss rate of the UDP packets sent by the WS. On the sniffers, we can decode captured wireless frames, which are any frames in the air, to determine whether they are UDP packets sent by the WS. On the STA, it is straightforward to count the received UDP packets since it is the intended receiver. Figure 14 shows that the UDP loss rate was significant on the sniffers, which is not surprising given the poor capturing capability of the sniffing program on congested channels. On the other hand, the UDP loss rate on the intended receiver STA was significantly lower. The difference is that the STA's radio, not operating in sniffing mode, filters out all frames that are not destined to itself. This hardware-based filtering is efficient and leaves the OS and the UDP receiver with little workload compared with the sniffers, whose radios have to pass all frames to the OS. These results confirm that the frame loss is caused by the sniffers rather than the wireless channel.

The previous experiments show that sniffers can lose many frames on congested channels. This implies that the test packets sent by the rogue AP verifier may not be received reliably by the sniffers. The question is then how well the proposed detection algorithm works on a congested channel.

We performed additional experiments. On the WS, we ran a workload generator that sent 20 UDP packets per second to the associated STA. These UDP packets were generated in such a way that the probability of a UDP packet having a size of 20 bytes was 5%. Our verifier then periodically sent 20-byte test packets at a rate of R packets per second (pps), under different congestion levels controlled by the jamming device. For each R and congestion level, we repeated the test 10 times and calculated the average number


of frames that needed to be observed before our detection algorithm could make a decision (Section 5.2). Note that these frames are not arbitrary wireless frames captured from the air; they are downstream data frames from the particular AP we are verifying.

The results show that the proposed verification algorithm reliably decided that the relaying AP was a rogue in all tests we performed, though it needed a different number of frames under different network setups. Figure 15 shows that, in general, the sniffer needs to observe more frames to conclude when the channel is more congested, or when fewer test packets are sent by the verifier. It captured about 350 frames, a mixture of workload and test packets, before it reported that the AP was a rogue under a congestion level of 200 injected frames per second (note that this causes about 65% UDP packet loss at the sniffers, as shown in Fig. 14). The faster the verifier can inject test packets (larger R), the quicker the detection algorithm can terminate with positive results. Here all reports were true positives; we study false positives and false negatives in the next subsection.

These experiments show that our detection algorithm is resilient even when the channel is heavily congested, causing the sniffers to lose packets. On the other hand, there are other reasons a sniffer may lose test packets sent by the verifier. For example, an attacker can launch various denial-of-service attacks [3] or abuse the 802.11 MAC protocol for cheating attacks [14]. The consideration of these attacks is out of the scope of this paper; we assume a wireless intrusion detection system, such as MAP [16], is deployed for this purpose.

6.4 Accuracy of rogue AP verification

In order to evaluate the accuracy of rogue AP verification, we collected a 7-day trace from a residential

[Figure 15 Data frames needed before the detection algorithm can make a decision, versus frame injection rate, for R = 1, 2, 5, 10, and 20 pps.]

AP, regularly used by 4 wireless devices. We captured the first 300 bytes of every packet, resulting in a 3.5 GB trace file. First we studied detection false positives, which are possible when we inject test packets of size s and s-sized packets also appear in normal AP traffic (not injected) during verification. The sniffer is then misled into reporting that the target AP is relaying test packets, resulting in false positives. We conducted 200 tests over this trace; in each test we first observed 5 min of AP traffic and then picked a packet size unseen in those 5 min as the size of the test packets. The packet size was randomly picked to be less than a maximum size (a controllable parameter). Here we assume that this AP is not a rogue and will not relay test packets, hence any rogue reports are false positives. The verifier started after the 5-min period and would stop if the detection algorithm terminated with a rogue or non-rogue report, or if it timed out (threshold t), concluding that the AP was not a rogue if there was no report from the sniffer within some time after the test packets were sent.
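The size-selection step in these tests can be sketched as follows. This is a hypothetical helper mirroring the experiment, not code from the paper: it picks a test packet size below the maximum that did not appear in the 5-min baseline window (the 20-byte lower bound is an illustrative minimum, matching the 20-byte test packets used in Section 6.3):

```python
import random

def pick_test_size(observed_sizes, max_size, min_size=20):
    """Pick a test packet size in [min_size, max_size] that was not seen
    in the baseline observation window; None if every size appeared."""
    candidates = [s for s in range(min_size, max_size + 1)
                  if s not in observed_sizes]
    return random.choice(candidates) if candidates else None
```

A size chosen this way has θ0 close to 1 in the baseline traffic, which, per Fig. 5a, minimizes the number of observations the sniffer needs.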

Table 1 shows the accuracy of rogue AP verification for test packet sizes randomly picked below a maximum size. We chose 10, 60, and 300 s as the timeout thresholds, and in most cases the verifier correctly classified the AP as a non-rogue. Note that these thresholds are quite conservative, and the longer the threshold the more likely the test packet size will appear in normal AP traffic (higher false positives). In practice, a 1-s timeout is typically good enough since an enterprise network is usually sufficiently provisioned and long delays are rare. Here the false positive rate (1 minus the detection rate) was low, though it increased as the test packet size and timeout threshold increased. This suggests that a smaller test packet size is desirable for better accuracy and less network overhead.

Next we studied false negatives, where a rogue AP is reported as a non-rogue, which may happen as follows. A sniffer receives an instruction from the verifier to start monitoring for test packets in the air. Before the test packets from the verifier are relayed by the AP, however, a sufficient number of non-test packets from that AP leads the sniffer to conclude the verification, reporting that the AP is not a rogue. Thus, the longer it takes the test packets to reach the AP, the more likely the sniffer will report the AP to be a non-rogue (a false negative).

Table 1 Accuracy of rogue AP verification

Timeout      Maximum size (bytes)
threshold    200      400      500      1000     1500
10 s         100%     100%     100%     100%     100%
60 s         100%     100%     100%     99.5%    99%
300 s        99.5%    99%      99.5%    99%      99%


[Figure 16 Time needed for binary hypothesis testing: CDF of verification time (s) for rogue AP detection.]

Figure 16 shows the CDF of the time needed for the binary hypothesis testing algorithm to terminate with a non-rogue report, using the residential trace. We found that 10% of the tests finished within 100 ms, which means 10% false negatives if it takes the test packets 100 ms to reach the target AP. However, almost 98.5% of the tests finished after 50 ms, which means 1.5% false negatives if the test packets arrive within 50 ms. The minimum finish time over all tests was 20 ms. Since enterprise network delays are typically on the order of several milliseconds or less, and the traffic on a rogue AP tends to be light, we do not expect high false negatives in real deployments. If a rogue AP is indeed classified as a non-rogue (a false negative), its verification will expire after a certain time and it will be tested again later (Section 4).

7 Discussions

Our research is conducted within the context of the MAP project [12], in collaboration with Dartmouth College and Aruba Networks. MAP aims to build a scalable measurement infrastructure using wireless sniffers, on top of which various online analysis algorithms are designed to detect WLAN security threats. Our detector will be integrated with MAP as an independent detector. Within MAP, we envision a building-wide deployment with 20–50 wireless sniffers and 1 rogue AP verifier. Assuming there are 1,000 active hosts in the building, each verified using 30 100-byte test packets, the imposed network load is about 3 MB over a relatively long period. We do not consider this load a bottleneck, given that many organizations already run a scanner to periodically check various properties of internal hosts as part of their security operations.

The verifier, on the other hand, may have to handle a large amount of traffic generated inside the building. While the processing is fairly lightweight (only checking the transport-layer header), there are ways to further reduce the overhead. For example, many routers can be configured to export NetFlow (or sFlow) records marking the endpoints of active flows. The verifier can greatly increase its scalability by using this information instead of parsing whole packet streams. This approach also reduces some privacy concerns since only IP addresses and port numbers are exposed to the verifier. The verifier, in this case, needs to randomly select a TCP sequence number since NetFlow does not provide this information. The probability of an SN collision, however, is relatively low given the large SN range. We plan to investigate this tradeoff in future work.

Our approach has a potential limitation in detecting rogue APs configured as VPN endpoints, which will drop the verifier's forged test packets since they do not have valid authentication headers; thus the wireless sniffers cannot see the test packets. Currently our solution is to have the verifier mark the IP addresses that only have one communication peer (a VPN tunneling effect) and alert the administrator for further checkup.

The rogue AP's owner may try to block verification traffic to avoid detection. Note that the verification traffic's source headers are forged so it looks like it comes from a current communicating peer rather than from the verifier, so the rogue AP cannot drop the traffic based on its source. On the other hand, a rogue AP owner who is familiar with our algorithm may still be able to block verification traffic on his AP by following TCP behavior and dropping out-of-sequence TCP packets. However, he risks dropping real application data if the verifier chooses sequence numbers very close to the last transmitted packet.

Our solution can easily be combined with other rogue AP detection methods. For example, a wireless sniffer may first try to associate with an open AP and ping the internal verifier. Our method only needs to be activated when the association fails or the AP requires authentication, further reducing the verification overhead.

8 Conclusion

We propose a new method in which a wired verifier coordinates with wireless sniffers to reliably detect unauthorized rogue APs. The verifier sends test traffic from the internal network to the wireless edge and reports the address of any confirmed rogue AP for automatic blocking.


With trace-based simulations, we show that the verifier's workload can be amortized over time when monitoring a large number of active hosts. Using sequential hypothesis testing theory, the verifier sends a sequence of test packets of a specific size so the sniffers can verify a rogue AP even when it encrypts its traffic. In practice, our greedy sniffer channel scheduling algorithm can quickly allocate sniffers to cover all suspect APs. Our empirical results also show that the detection algorithm is resilient on heavily congested channels and is accurate on empirical AP traces.

Acknowledgements This work is supported in part by NSF under Award CCF-0429906 and by the Science and Technology Directorate of the U.S. Department of Homeland Security under Award NBCH2050002. Points of view in this document are those of the authors and do not necessarily represent the official position of NSF or the U.S. Department of Homeland Security. We thank the MAP project team at Dartmouth College and Aruba Networks for the constructive discussions on the proposed detection method. David Martin also provided valuable comments on an early draft of this paper. We also thank the Dartmouth CRAWDAD team, particularly Jihwang Yeo, and the ICSI/LBNL group who made efforts to release the network traces used in our experiments.

References

1. Bahl P, Chandra R, Padhye J, Ravindranath L, Singh M,Wolman A, Zill B (2006) Enhancing the security of corporateWi-Fi networks using DAIR. In: Proceedings of the fourthinternational conference on mobile systems, applications, andservices, Uppsala, June 2006

2. Bahl P, Padmanabhan VN (2000) RADAR: an in-building RF-based user location and tracking system. In: Proceedings of the 19th annual joint conference of the IEEE computer and communications societies, Tel Aviv, March 2000

3. Bellardo J, Savage S (2003) 802.11 denial-of-service attacks: real vulnerabilities and practical solutions. In: Proceedings of the 12th USENIX security symposium, Washington, DC, August 2003, pp 15–28

4. Bittau A, Handley M, Lackey J (2006) The final nail in WEP’s coffin. In: Proceedings of the 2006 IEEE symposium on security and privacy, Oakland, May 2006

5. Bulk F (2006) Safe inside a bubble. June. www.networkcomputing.com

6. Deshpande U, Henderson T, Kotz D (2006) Channel sampling strategies for monitoring wireless networks. In: Proceedings of the second workshop on wireless network measurements, Boston, April 2006

7. Garey MR, Johnson DS (1979) Computers and intractability: a guide to the theory of NP-completeness. Freeman, Nashville

8. Garg S, Kappes M, Krishnakumar AS (2002) On the effect of contention-window sizes in IEEE 802.11b networks. Technical report ALR-2002-024, Avaya Labs Research

9. He C, Mitchell JC (2005) Security analysis and improvements for IEEE 802.11i. In: Proceedings of the 12th network and distributed system security symposium, San Diego, February 2005

10. Hochbaum D (1997) Approximating covering and packing problems: set cover, vertex cover, independent set, and related problems. In: Hochbaum D (ed) Approximation algorithms for NP-hard problems. PWS, Boston

11. Jung J, Paxson V, Berger AW, Balakrishnan H (2004) Fast portscan detection using sequential hypothesis testing. In: Proceedings of the 2004 IEEE symposium on security and privacy, Berkeley, May 2004, pp 211–225

12. MAP (2006) Security through measurement for wireless LANs. Dartmouth College, July. http://www.cs.dartmouth.edu/~map/

13. Pang R, Tierney B (2005) A first look at modern enterprise traffic. In: Proceedings of the fifth ACM internet measurement conference, Berkeley, October 2005, pp 15–28

14. Raya M, Hubaux J-P, Aad I (2004) DOMINO: a system to detect greedy behavior in IEEE 802.11 hotspots. In: Proceedings of the second international conference on mobile systems, applications, and services, Boston, June 2004, pp 84–97

15. Rodrig M, Reis C, Mahajan R, Wetherall D, Zahorjan J (2005) Measurement-based characterization of 802.11 in a hotspot setting. In: Proceedings of the ACM SIGCOMM workshop on experimental approaches to wireless network design and analysis, Philadelphia, August 2005, pp 5–10

16. Sheng Y, Chen G, Tan K, Deshpande U, Vance B, Yin H, Henderson T, Kotz D, Campbell A, Wright J (2008) MAP: a scalable monitoring system for dependable 802.11 wireless networks. IEEE Wirel Commun, October 2008, pp 10–18

17. Mobile Antivirus Researcher’s Association (2006) The ten most critical wireless and mobile security vulnerabilities. Mobile Antivirus Researcher’s Association, June

18. Wald A (1947) Sequential analysis. Wiley, New York

19. Wang X, Reeves DS (2003) Robust correlation of encrypted attack traffic through stepping stones by manipulation of inter-packet delays. In: Proceedings of the 10th ACM conference on computer and communications security, Washington, DC, October 2003, pp 20–29

20. Wei W, Jaiswal S, Kurose J, Towsley D (2006) Identifying 802.11 traffic from passive measurements using iterative Bayesian inference. In: Proceedings of the 25th annual joint conference of the IEEE computer and communications societies, Barcelona, April 2006

21. Wei W, Suh K, Wang B, Gu Y, Kurose J, Towsley D (2007) Passive online rogue access point detection using sequential hypothesis testing with TCP ACK-pairs. In: Proceedings of the seventh ACM internet measurement conference, San Diego, October 2007