
Rapid Prototyping of Mobile Input Devices Using Wireless Sensor Nodes

James Carlson, Richard Han, Shandong Lao, Chaitanya Narayan, Sagar Sanghani
University of Colorado at Boulder
Boulder, Colorado, 80309-0530

{james.carlson, richard.han}@colorado.edu

Abstract

Many options exist for prototyping software-based human-computer interfaces, but there are few comparable technologies that allow application developers to create ad-hoc hardware interfaces with corresponding ease and flexibility. In this paper, we present a system for rapidly constructing low-cost prototypes of mobile input devices by leveraging wireless sensor nodes. We demonstrate two proof-of-concept applications, a conductor's baton and a scene navigation controller, that are prototyped using wireless sensor networks. These applications illustrate that wireless sensor technology can be used to quickly and inexpensively prototype diverse physical user interfaces. We believe that this technology will be of value in many areas, including the study of ergonomics, haptic interfaces, collaborative design, low-cost VR systems, and usability research.

1. Introduction

One of the most challenging and pervasive problems in the field of immersive visualization (more commonly known as virtual reality, or VR) is determining which type of input device is most appropriate for an application. Unlike desktop PC environments, where industry standards in user interface design have led to the dominance of the keyboard and mouse as primary input devices, such standards have yet to be established for immersive 3D environments [13, 14]. In order to facilitate the exploration of novel controllers, the developers of VR software should be able to create working prototypes of input devices concurrently with software application development, rather than designing applications around the limitations of existing input devices. Unfortunately, building such prototypes is a costly and time-consuming venture, requiring experience in both electrical engineering and software engineering, as well as domain-specific expertise in immersive visualization.

Our goal is to provide an enabling technology that helps bridge the gap between hardware interface design and software interface design. We demonstrate how wireless sensor nodes can be used as an abstraction layer, allowing for a system by which novel wireless input devices can be rapidly prototyped, at very low cost, by anyone who has an elementary understanding of electronics. By focusing on applications in immersive visualization, this work points to new considerations for the design of wireless sensor network technology that are relatively unexplored by similar efforts in ubiquitous computing and embedded interface research.

2. Design requirements

The testbed for this research was the immersive visualization environment (IVE) at the BP Center for Visualization at CU, Boulder. Unlike VR systems that rely on a head-mounted display, the IVE is a form of VR display in which the user is surrounded by three 10 x 12 foot walls and a 12 x 12 foot floor onto which a stereoscopic image is projected (Figure 1). A position and orientation tracker is attached to the user's stereoscopic glasses, allowing the visualization software to recalculate the perspective of the 3D graphics based on the user's proximity to the walls. The main advantage of an IVE is that it facilitates collaborative work: multiple people can use the facility simultaneously in order to view the same data. This usage scenario, coupled with the fact that the primary user is mobile, motivates the strong preference for wireless input devices.

Our decision to focus on applications in immersive environments poses some other unique challenges. We are concentrating on providing real-time (or near real-time) interaction, and therefore system latency is a primary concern. As a general guideline, a delay of less than 150 ms between manipulation of the input device and the resultant change in the software interface is acceptable [9], but there are applications for which a finer resolution (or more accurate timing of the user's input) is required. The physical characteristics of the IVE require omnidirectional wireless communication at an effective range of at least 12 feet, and our desire to incorporate this technology into handheld devices imposes a size restriction on the hardware. Finally, it must be possible to connect the system with the computer that controls the visualization. It cannot be assumed that a direct physical connection is available, since VR environments are often powered by supercomputers which must be housed in an air-conditioned room that is separate from the graphical display.

Figure 1. Within an immersive visualization environment (IVE), wireless sensor nodes are used to prototype novel mobile input devices, e.g. a conductor's wand.

There are also a number of usability concerns that must be addressed. Our target audience is the software development community, and therefore we cannot assume that the users of this system have a strong background in electrical engineering; it should be possible to build useful input devices using very simple circuitry. Conversely, it is undesirable to limit the input sensors to a predefined set of simple "plug and play" circuits, since our primary goal is to allow developers to think freely, without the restrictions of externally enforced preconceptions. The software interfaces should share this characteristic of simplicity while maintaining flexibility: the developer should not be required to learn a new programming language in order to use the system, and our API should integrate well with the code of existing immersive applications. Finally, the platform must be sufficiently cost-effective to justify its use in preference to building new input devices from scratch.

To summarize, our prototyping system should have the following characteristics:

• Makes hardware-based input device prototyping accessible to software engineers with little background in electrical engineering.

• Addresses specific requirements for use in the IVE: low latency, omnidirectional wireless communication, small size.

• Allows for easy reconfiguration of both hardware and software for use in many different applications.

• Provides a user interface and/or API that is compatible with current standards for 3D input device libraries.

• Low cost compared to alternative approaches.

3. Selection of the wireless sensor platform

Rapid prototyping of physical computer interfaces is not a new concept, and there are many options for developing wireless input devices for IVEs. Here we discuss existing approaches from the fields of immersive visualization and ubiquitous computing, and the rationale behind our decision to base our technology on wireless sensor networks.

3.1. Related work

A system for prototyping wired VR input devices was first proposed in [3] using Lego(TM) bricks. More recently, a solution provided by InterSense (a leading manufacturer of tracked input devices for immersive environments) exposes an I2C bus on their wireless modules to allow customization by OEMs [20]. This approach handles the problem of low-level connectivity to the visualization server, but the developer must still have experience in analog and digital circuit design in order to take advantage of this facility.

One interesting approach to creating rapidly reconfigurable mobile interfaces is to build a software-based GUI on a PDA or tablet PC that interfaces with the visualization server via wireless Ethernet. There are many positive aspects to this approach (instant reconfiguration via software, for example), but it is important to note that it also suffers from significant limitations when used in an immersive 3D environment. The physical form factor of the device is inflexible, and a touch screen interface provides no tactile differentiation between the controls. In an immersive environment, the best interface is often one that does not require the user to look away from the data, and the lack of a physical representation of the PDA's graphical buttons precludes their use in many applications [15].

Several research projects have emerged from the ubiquitous computing field that are specifically targeted at physical interface prototyping. The Phidgets [11] project aims to create a system for developing physical input devices and actuators, primarily in a wired environment. More applicable solutions are proposed in [4] and [10], which focus on the use of wireless networks of mobile sensors to form the infrastructure for reconfigurable interactive devices.


It is important to recognize that the goals of ubiquitous computing and immersive visualization are very different, and in some cases, almost in opposition: immersive visualization attempts to completely supplant the user's environment for a finite period of time, whereas the objective of ubiquitous computing is, arguably, to permanently integrate computers into everyday objects in order to augment the user's environment without an intrusive computer interface. For this reason, some considerations that are of great importance in ubiquitous computing applications, such as power efficiency, security, distributed communication, and context awareness, are of marginal interest in VR interfaces. On the other hand, greater attention must be paid to issues such as system latency [4], range of wireless communication [2], and the ability to integrate with legacy software and hardware.

3.2. Wireless sensor networks

Wireless sensor nodes provide an excellent compromise between simplicity and utility, and are sufficiently inexpensive that experimentation with these devices does not require a major investment. We examined several sensor node technologies before settling on the MANTIS Nymph, a new wireless sensor network platform that is being developed independently of this project at the University of Colorado.

The first platform that we investigated was the Berkeley Mica sensor node running TinyOS [12]. The Mica node fits many of our requirements, such as omnidirectional wireless communication, small size, and the ability to connect a diverse range of sensors. However, a separate circuit board must be attached to the Mica in order to connect simple sensors, which increases both the size and complexity of this solution. TinyOS itself, while well established in the wireless sensor network community, requires the developer to learn an unfamiliar event-driven programming model in order to customize the system infrastructure, which also raises the barrier to entry.

Our first prototype was based on the MIT HandyCricket [8], which was designed specifically to support simple hardware prototyping and robotics applications [17]. The HandyCricket's onboard analog-to-digital converter (ADC) allows resistive sensor circuits to be connected directly without an external circuit board, and the Cricket Logo language is trivial to learn. Unfortunately, the HandyCricket proved to be too limited for our use: wireless communication is handled via infrared transceivers, which restricts communication to line-of-sight operation over a very short range. The computational abilities of the HandyCricket are very limited as well; its slow clock speed, combined with a tiny (less than 4 kilobyte) program memory, would restrict future expansion of the platform.

Figure 2. The MANTIS Nymph wireless sensor node.

MANTIS [1] is a new mobile sensor node platform that provides a sophisticated, multi-threaded operating system running on a small (3.5 x 5.5 x 2 cm) hardware device known as the MANTIS Nymph (Figure 2). The Nymph has three on-board sensor ports, wireless communication via a 900 MHz radio, and direct serial port connectivity. Like the HandyCricket, the Nymph has an onboard ADC that allows direct connection of sensor circuits, without requiring the user to design and build a custom circuit board. The MANTIS operating system (MOS) was designed from the ground up to provide a familiar, UNIX-like programming environment with a simple C API. MOS is also being ported to run on the Berkeley Mica hardware, allowing us the flexibility to experiment with sensor nodes other than the Nymph. Due to these advantages, we decided to build our system around the MANTIS platform for this stage in our research.

4. System architecture

Figure 3 shows the six hardware components that comprise our system. The first components are the sensor circuits themselves, which form the buttons, knobs, wheels, etc., that provide the user with a physical interface to the immersive application. The second component is the mobile Nymph to which the sensor circuits are connected. The third component is a base station Nymph, which acts as the collection point for the data that are generated by the mobile Nymph. Connected to the base station via the serial port is the PC, the fourth component, which reads the incoming data and forwards them to the server which drives the VR display. This visualization server is the fifth component, which receives the data stream from the PC and interprets the sensor readings so that the immersive application can respond to the user's manipulation of the custom input device. Finally, the server updates the graphics based on the changes in the immersive application and transmits the image to the display device (most commonly, a stereo-capable projection system), thus completing the cycle.

Figure 3. Data flow diagram of the MANTIS-based input device prototyping system.

4.1. Software design

Our software architecture is divided into three stages:

• MANTIS stage: The mobile MANTIS Nymph collects sensor data, which is transmitted wirelessly to the PC via an intermediate, wired Nymph (the base station).

• PC stage: Acts as a conduit between the MANTIS stage and the Server stage by reading the sensor data from the serial port and retransmitting it via TCP/IP to the Server stage.

• Server stage: A daemon running on the visualization server which receives the incoming TCP packets and interprets the sensor data based on per-application configuration files.

4.1.1. MANTIS stage. Two separate programs are used in this stage: one runs on the mobile Nymph and transmits sensor readings, and another runs on the base station Nymph and listens for incoming packets. For both ends of the MANTIS stage, simplicity is of primary concern, since mobile sensor nodes are severely resource-constrained in comparison with PCs and servers; therefore, we have offloaded as much of the data processing as possible to the Server stage, allowing the server to handle the interpretation of the sensor readings.

As stated in Section 2, communication latency is a primary concern for this effort. We can categorize applications of input devices into two categories: those that require a response that is as fast as possible, and those that require accurate measurement of the timing of events. The first category applies to controllers such as joysticks, button pads, or any other type of device where the user expects an immediate response when the controller is manipulated. The second category encompasses devices that are used to support gestural interfaces (such as the conductor's baton, discussed in Section 5.2), or scientific research applications for which the precise time of a user's reaction must be recorded.

Figure 4. Conceptual stages of the software.

The only solution for the first category of timing requirements is to ensure that the latency is within acceptable limits: the hardware and software architecture must be capable of supporting fast communication between the wireless device and the base station, and between the PC and the visualization server. To confirm that our architecture achieves our required latency of less than 150 ms, we tested both the round-trip time of communication between the mobile device and the server and the timing of communication between each stage in the system. Our analysis shows that our system has a total latency of approximately 120 ms for a 6-byte packet, with the overwhelming majority of the latency caused by the Nymph-to-Nymph wireless communication. (The latencies for the Nymph-to-PC and PC-to-server communication were found to be 11 ms and 3.5 µs, respectively.) While this latency falls within our stated requirements (and anecdotal reports from users suggest that many people find this delay acceptable), it is still much slower than desirable, especially considering delays that will be introduced by the visualization software itself. We intend to investigate this issue further to find the exact source of this slowdown, and to determine what steps can be taken to reduce the latency.
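For illustration, the round-trip portion of such a test reduces to a ping-style loop on the server side. The sketch below is not the test code used in this work; send_to_controller() and recv_from_controller() are hypothetical stand-ins for whatever transport connects the stages:

    // Ping-style round-trip latency measurement (illustrative sketch only).
    #include <sys/time.h>
    #include <cstdio>

    void send_to_controller(const char* buf, int len);  // hypothetical transport
    void recv_from_controller(char* buf, int len);      // hypothetical transport

    static double now_ms() {
        struct timeval tv;
        gettimeofday(&tv, 0);
        return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
    }

    int main() {
        char probe[6] = {0};                       // 6-byte probe, matching the payload size
        double t0 = now_ms();
        send_to_controller(probe, sizeof(probe));
        recv_from_controller(probe, sizeof(probe)); // block until the probe is echoed back
        printf("round trip: %.1f ms\n", now_ms() - t0);
        return 0;
    }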

The second class of applications requires a different approach, especially in conditions where the system latency is unacceptably slow, or variable due to radio interference. If all that is required is an accurate indicator of the relative timing of sensor events, a simple time stamp can be included in the packet that is sent from the mobile Nymph to the base station; this technique was used effectively in the conductor's baton application. If the time stamp must be accurate relative to the system time on the visualization server, a domain-specific version of the network time protocol (NTP) can be used to synchronize the clock on the mobile Nymph with that of the server, giving us global time synchronization across all levels of the system. We are currently exploring the application of this technique to an input device that will be used in a study of Alzheimer's Disease, where an accurate measure of human response time must be recorded.
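For reference, the core arithmetic of such an NTP-style exchange is small; a sketch (not the implementation described here) using the four standard timestamps of a request/response round trip:

    // Classic NTP offset estimate (sketch; assumes symmetric network delay).
    // t0 = client send, t1 = server receive, t2 = server send, t3 = client receive.
    double clock_offset(double t0, double t1, double t2, double t3) {
        // Offset of the server clock relative to the client clock.
        return ((t1 - t0) + (t2 - t3)) / 2.0;
    }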

To address both of these concerns, the software on the mobile Nymph continuously reads and forwards sensor data, sending a 6-byte packet consisting of a packet sequence number, a 16-bit time stamp (measured in centiseconds), and three 8-bit sensor readings; we leave the issue of Nymph-server time synchronization for future work. The source code that implements this functionality is shown in Figure 5. At present we are also not addressing the issue of unreliable communication: MOS does support a stop-and-wait reliable networking protocol, but it was determined to be inappropriate for our application given the required maximum latency of 150 ms.
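The field names below are assumptions (Figure 5 defines the actual encoding), but the layout described above can be pictured as:

    // 6-byte sensor packet as described in the text (field names assumed).
    #include <stdint.h>

    #pragma pack(push, 1)      // no padding: 1 + 2 + 3 = 6 bytes on the wire
    struct SensorPacket {
        uint8_t  seq;          // packet sequence number
        uint16_t timestamp;    // centiseconds; wraps roughly every 11 minutes
        uint8_t  sensors[3];   // three 8-bit ADC readings
    };
    #pragma pack(pop)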

The software on the base station Nymph is quite simple: a single thread listens on the designated radio channel and forwards packets directly to the PC via the serial port.

4.1.2. PC stage. The PC plays the role of a bridge between the MANTIS stage and the Server stage. This bridge is necessary for several reasons, the most important of which is the physical separation of the visualization server from the IVE, as discussed in Section 2. The practical range of a serial connection is only a few meters, so a range extender would be required to connect the base station Nymph directly to the server. The second reason is platform independence: although our system is initially being targeted at SGI-based VR environments, we hope to make this technology available on a wide variety of platforms. The APIs for reading data from a serial port vary widely across operating systems, but the TCP/IP APIs are much more standardized across platforms, which will make porting the technology much more straightforward.

The PC stage consists of a simple program that listens on the serial port for data from the Nymphs and forwards the raw data to the server via TCP. This program also adjusts for wrap-around of the mobile Nymph's 16-bit time stamp, converting this value to a 32-bit integer before forwarding the data.
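A minimal sketch of that adjustment, assuming the counter simply rolls over and packets are processed in order:

    // Unwrap a 16-bit centisecond timestamp into a 32-bit value (sketch).
    #include <stdint.h>

    uint32_t unwrap_timestamp(uint16_t raw) {
        static uint16_t last = 0;
        static uint32_t wraps = 0;   // completed 16-bit rollovers so far
        if (raw < last)              // counter wrapped since the last packet
            wraps += 1;
        last = raw;
        return (wraps << 16) + raw;
    }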

It may seem that the PC stage adds significantly to the expense of our architecture. While it is true that using a PC as a bridge is not as inexpensive as a serial cable, we would argue that the cost is not prohibitive. The TINI board [19], for example, is a Java-based device that provides serial and 10Base-T Ethernet connectivity, and could serve as the serial-to-Ethernet bridge for a cost of $50.

Figure 5. The compact application source code that runs on the remote MANTIS sensor Nymph.

4.1.3. Server stage. The main objective for the Server stage is to enable the programmer to integrate MANTIS-based input devices into immersive applications in a way that is not awkward or significantly different from existing input device APIs. The de facto standard for VR software development is VRCO's CAVELib library [7]; we therefore measure acceptability by the similarity of our API to that of CAVELib. The design of our API mirrors that of CAVELib in two ways: the use of a configuration file to establish library settings upon initialization, and simple "call-and-return" queries of the input device, instead of a more elaborate approach (e.g. an event-driven system using callbacks).

The API for the MANTIS-based input device is implemented as a C++ object class that acts as a TCP/IP server. Upon instantiation, it establishes a thread which listens for client (PC) communication on a designated port, and interprets the packets received according to the settings in a configuration file. Since a Nymph has only three sensor ports, we anticipate that a voltage divider circuit will commonly be used to support multiple switches on a single sensor port, so our API includes provisions for automatically interpreting switch states based on impedance-to-switch mappings specified in the configuration file. Methods are also provided for reading the 8-bit value of each sensor and the time at which the latest sensor reading was taken.
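As an illustration of the impedance-to-switch interpretation, the server might decode an 8-bit reading against levels loaded from the configuration file roughly as follows; the structure and tolerance value are assumptions, not the published API:

    // Map an ADC reading to a switch ID using configured voltage levels (sketch).
    #include <cstdlib>
    #include <vector>

    struct SwitchLevel { int adcValue; int switchId; };

    // Returns the switch whose configured level is nearest the reading,
    // or -1 if no level is within tolerance (i.e. no button is pressed).
    int decodeSwitch(int reading, const std::vector<SwitchLevel>& levels,
                     int tolerance = 8) {
        int best = -1, bestDist = tolerance + 1;
        for (const SwitchLevel& s : levels) {
            int d = std::abs(reading - s.adcValue);
            if (d < bestDist) { bestDist = d; best = s.switchId; }
        }
        return best;
    }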

5. Applications

In order to assess the feasibility of our system, we constructed two novel input devices, corresponding to each of the two timing problems discussed in Section 4.1.1. The first input device is a simple gamepad-like controller that allows the user to navigate and manipulate a virtual scene, and tests our claim that the latency of our system is acceptable in practice for an application that demands fast interaction. The second device shows how a gestural interface can be constructed that uses the relative timing of sensor events, demonstrating our initial approach to the second class of timing problems. The goal of both applications is to serve as proofs of concept for the ability to rapidly prototype novel controllers, and to provide initial evidence of the effectiveness of the input devices in practice.

5.1. Model navigation controller

For the first controller, we constructed a very simple immersive application that allows the user to navigate a virtual room and to adjust the level of lighting in the room (Figure 6). Since this project was our first attempt to use our system in practice, we focused on building a suitable input device as rapidly as possible, rather than on the novelty of that device.

Figure 6. Screenshot of the model navigation demo.

The input device that we chose to construct is modeled after the control pads commonly found on video game consoles. An inexpensive plastic box was used to house the Nymph, and four buttons were arranged in a diamond pattern on the surface, allowing the user to move forward, backward, left, and right (these buttons are connected to a voltage divider circuit, as discussed in Section 4.1.3). A potentiometer was mounted on the opposite side of the box, which acts as the lighting controller. The completed input device, shown in Figure 7, was constructed entirely from off-the-shelf components in approximately four hours, at a total cost of approximately $150 (not including the cost of the PC conduit). Although we did not perform rigorous user testing with this input device, anecdotal evidence from users suggests that the latency is acceptable.

Figure 7. Input device for model navigation demo. The bottom image shows the construction of the device.

An existing CAVELib application was used as the framework for our model navigation application. Apart from the changes that were required to display the desired 3D model, fewer than 20 lines of code were modified, and only 7 lines were added to support the custom input device. Figure 8 shows the relevant sections of code: the top section in the figure shows the code that is used to initialize the controller, while the bottom section shows the controller's code integrated with the per-frame update code for the CAVELib application.
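The listing itself is not reproduced in this text, but the call-and-return pattern it illustrates is straightforward. In a sketch with hypothetical names (MantisInput, buttonPressed(), and sensorValue() are stand-ins, not the published API), the per-frame integration might look like:

    // Hypothetical sketch of the per-frame controller queries (names assumed).
    enum Button { FORWARD, BACKWARD, LEFT, RIGHT };

    struct MantisInput {                      // stand-in for the real C++ class
        explicit MantisInput(const char* configFile);
        bool buttonPressed(Button b) const;   // decoded via the switch mappings
        int  sensorValue(int port) const;     // latest 8-bit reading for a port
    };

    const float STEP = 0.05f;
    void moveCamera(float dx, float dy, float dz);   // application-side helpers
    void setLightLevel(float level);

    void perFrameUpdate(MantisInput& input) {
        if (input.buttonPressed(FORWARD))  moveCamera(0, 0, -STEP);
        if (input.buttonPressed(BACKWARD)) moveCamera(0, 0,  STEP);
        if (input.buttonPressed(LEFT))     moveCamera(-STEP, 0, 0);
        if (input.buttonPressed(RIGHT))    moveCamera( STEP, 0, 0);
        setLightLevel(input.sensorValue(2) / 255.0f); // potentiometer port
    }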


Figure 8. Server-side source code for the model navigation demo. Lines in bold are specific to the controller's API.

5.2. Conductor’s baton: a gestural interface

Our second input device demonstrates the utility of our system for experimentation with gestural interfaces. Instead of building a new application from scratch, we incorporated our device into an algorithmic music application that had been written for a Master's thesis at the University of Colorado [16]. The application generates, in real time, a variation on a MIDI sequence by mapping musical segments to regions of a three-dimensional chaotic attractor. A new trajectory, beginning from a different initial condition, is generated using the same chaotic system. For each new point generated, the technique efficiently finds a containing region in the original attractor and triggers the corresponding music segment. The result is that the notes of the original piece are played in a new, nonrandom order.

Inspired by work such as the virtual orchestra conductor shown in [5], we built an electronic "conductor's baton" that uses an accelerometer to detect the motion of the user's hand. A two-axis accelerometer was attached to the end of a wooden stick, with the X and Y axes connected to different sensor ports on the mobile Nymph (Figure 9). In order to calculate a tempo based on the values measured from the accelerometer, the server-side program stores the time stamp that is sent when the acceleration of the wand's tip reaches a peak. At the next peak, the application takes the difference between the old time stamp and the new time stamp, then calculates the tempo in beats per second based on the frequency of the peaks.
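That computation reduces to a few lines; a sketch (variable names assumed, timestamps in the centisecond units sent by the Nymph):

    // Derive tempo from the interval between successive acceleration peaks (sketch).
    #include <stdint.h>

    double updateTempo(uint32_t peakTime) {    // unwrapped centisecond timestamp
        static uint32_t lastPeak = 0;
        double beatsPerSecond = 0.0;
        if (lastPeak != 0 && peakTime > lastPeak) {
            double intervalSec = (peakTime - lastPeak) / 100.0;
            beatsPerSecond = 1.0 / intervalSec; // one beat per peak interval
        }
        lastPeak = peakTime;
        return beatsPerSecond;
    }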

Figure 9. Conductor’s baton.

The construction of the baton took approximately three hours, including two hours to write the software that interprets the tempo and an additional hour to fit the accelerometer to the Nymph. The total cost was roughly $180.

6. Future directions

The work that we have discussed above is only the first step in creating an ideal technology for developing custom input devices for immersive applications. There are a number of directions that we intend to explore, including on-board localization, collaborative interfaces using multiple sensor nodes, and support for haptic feedback.

At present, input devices created with our system are limited to applications where the physical location of the device is not important, because we lack the ability to detect the position of the Nymph inside the IVE. It is possible to build a tracked device by mounting an external tracking module onto the enclosure [3], but this requires expensive tracking hardware such as the systems produced by InterSense [20]. We would like to investigate the practicality of on-board localization using ultrasound, as described in [18].

Haptic feedback has been found to be important for immersive interfaces [15]. While the MANTIS Nymph currently lacks the ability to drive any form of actuator, we are looking into the possibility of adding support for servo control.


The applications that we have developed thus far do not extend to designs that would require more than one sensor node, but there are many contexts in which such an interface would be appropriate. The MANTIS platform supports multi-hop routing and conflict avoidance amongst multiple Nymphs, but it remains to be seen how much performance will suffer when multiple nodes are in competition for the attention of the base station.

Finally, we must perform usability testing of the prototyping system itself. We claim in this paper that we have reduced the complexity of building custom input devices sufficiently that hardware prototyping is accessible to software developers who have limited knowledge of electrical engineering, but we have little evidence beyond our own experiences to justify this claim. We will also need to test the performance requirements of these devices in terms of the maximum acceptable response time; as was found in the study of 2D interfaces in [9], the requirements for responsiveness of an interface vary based on the nature of the input device, and it stands to reason that the same variations may be found in 3D interfaces.

7. Conclusion

Current input device hardware for immersive 3D environments is expensive, fragile, and usually not wireless, and the variety is limited to a few general-purpose controllers such as wands and pinch gloves. Since the field is still in its infancy, it is not known whether or not these controllers are actually the best solutions for the domain, and more experimentation with novel controllers is needed. We have presented a simple, inexpensive solution for rapid prototyping of input devices using the MANTIS wireless sensor network platform. We have discussed the constraints on the overall system and the strengths and limitations of the Nymphs, and shown an effective three-stage approach to interfacing a MANTIS node with the computer that drives the VR display. The demonstration input devices and VR applications confirm the feasibility of integration with immersive applications using standard CAVE programming libraries. Wireless sensor node-based input devices show promise for real applications, but more rigorous usability and timing experiments are needed. Enhancements such as tactile feedback and collaborative interfaces using multiple Nymphs are also under consideration for continued research.

References

[1] H. Abrach, S. Bhatti, J. Carlson, H. Dai, J. Rose, A. Sheth, B. Shucker, R. Han, "MANTIS: System Support for MultimodAl NeTworks of In-situ Sensors," 2nd ACM International Workshop on Wireless Sensor Networks and Applications (WSNA), 2003 (to appear).

[2] D. Avrahami, S. Hudson, "Forming Interactivity: A Tool for Rapid Prototyping of Physical Interactive Products," Proceedings of the Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, June 2002.

[3] M. Ayers, R. Zeleznik, "The Lego Interface Toolkit," Proceedings of the 9th Annual ACM Symposium on User Interface Software and Technology, pages 97-98, November 1996.

[4] R. Ballagas, M. Ringel, M. Stone, J. Borchers, "iStuff: A Physical User Interface Toolkit for Ubiquitous Computing Environments," Proceedings of the Conference on Human Factors in Computing Systems, April 2003.

[5] J. Borchers, W. Samminger, M. Muhlhauser, "Conducting a Realistic Electronic Orchestra," Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology, pages 161-162, November 2001.

[6] BP Center for Visualization website, http://www.bpvizcenter.com/

[7] CAVELib Programming Reference, http://vrco.com/CAVE_USER/

[8] "Crickets: Tiny Computers for Big Ideas," http://web.media.mit.edu/~fredm/projects/cricket/

[9] J. Dabrowski, E. Munson, "Is 100 Milliseconds Too Fast?," CHI '01 Extended Abstracts on Human Factors in Computing Systems, pages 317-318, March 2001.

[10] H. Gellersen, A. Schmidt, M. Beigl, "Multi-Sensor Context-Awareness in Mobile Devices and Smart Artefacts," Mobile Networks and Applications, Volume 7, Issue 5, October 2002.

[11] S. Greenberg, C. Fitchett, "Phidgets: Easy Development of Physical Interfaces through Physical Widgets," Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology, November 2001.

[12] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, K. Pister, "System Architecture Directions for Networked Sensors," Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems, pages 93-104, November 2000.

[13] K. Hinckley, R. Pausch, J. Goble, N. Kassell, "A Survey of Design Issues in Spatial Input," Proceedings of the 7th Annual ACM Symposium on User Interface Software and Technology, pages 213-222, November 1994.

[14] R. Jacob, "Human-Computer Interaction: Input Devices," ACM Computing Surveys (CSUR), pages 177-179, March 1996.

[15] R. Lindeman, J. Sibert, J. Hahn, "Towards Usable VR: An Empirical Study of User Interfaces for Immersive Virtual Environments," Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 64-71, May 1999.

[16] J. Marbach, "Real-Time Chaotic Variation of Symbol Sequences," Master's thesis, University of Colorado at Boulder, July 2003.

[17] F. Martin, B. Mikhak, B. Silverman, "MetaCricket: A Designer's Kit for Making Computational Devices," IBM Systems Journal, Vol. 39, Nos. 3&4, 2000.


[18] A. Savvides, C. Han, M. Srivastava, "Dynamic Fine-Grained Localization in Ad-Hoc Networks of Sensors," Proceedings of the Seventh Annual International Conference on Mobile Computing and Networking, pages 166-179, July 2001.

[19] "TINI Board," http://www.ibutton.com/TINI/hardware/index.html

[20] D. Wormell, E. Foxlin, "Advancements in 3D Interactive Devices for Virtual Environments," Proceedings of the Workshop on Virtual Environments, pages 47-56, May 2003.
