


1536-1268/05/$20.00 © 2005 IEEE ■ Published by the IEEE CS and IEEE ComSoc ■ PERVASIVE computing

RAPID PROTOTYPING

Multipurpose Prototypes for Assessing User Interfaces in Pervasive Computing Systems

In pervasive computing systems, prototypes serve several uses and have different requirements related to those uses. Some prototypes test a pervasive computer system's physical form—for example, will this wearable computer be light enough for eight hours of maintenance work or small enough to let a maintenance worker crawl through a narrow passage inside a machine? The essential features of a prototype for this purpose are form factor and weight, but it need not be functional in any interactive way. Other prototypes demonstrate whether a proposed technical solution can function at all—for instance, can a signal get from A to B in sufficient time and with sufficient strength to fulfill the envisioned system's functional needs? These prototypes must be narrowly functional (perhaps only sending a signal in a full round-trip) but not broadly functional, and they don't need to simulate the system's expected user interface.

Prototypes for pervasive system UIs often serve one of three purposes: they communicate design ideas to the development team, management, or customers; they provide a basis for user studies, often with novice users, to identify the design's usability problems; and they gather information about skilled users' projected performance. In deciding when to build and test prototypes, the balance between development costs and benefits tilts in favor of easy-to-build prototypes that fulfill multiple purposes.

We've developed CogTool to enable low-cost, rapid construction of interactive prototypes that serve all three UI purposes.1 CogTool's core prototyping technique is storyboarding—specifically, interactive storyboarding using HTML. Rather than covering all possible pervasive systems, CogTool focuses on systems involving deliberate commands that the user invokes by some motor action. Such systems include PDAs, cell phones, handheld terminals (such as those used by rental car return personnel), in-vehicle driver information systems, and certain wearable computers that run desktop- or PDA-like applications.

In pervasive computing systems, prototypes can communicate design ideas, support user testing, and predict skilled performance. Designers can use CogTool to build HTML storyboards as prototypes for these purposes.

Bonnie E. John, Carnegie Mellon University
Dario D. Salvucci, Drexel University

Multiple uses for HTML prototypes

Designers can use storyboard-like UI prototypes to communicate design ideas to team members from software development, management, marketing, sales, and technical writing. Such prototypes can

• create a shared understanding and get various team members' buy-in for the design's direction;
• provide formative information about the UI—that is, identify usability problems early in the development process so team members can make improvements before investing substantial coding time; and
• provide summative information—that is, show how much better the new system will be than the current system. For example, some pervasive computing systems replace paper-based systems, such as a wearable computer replacing a paper checklist or an in-vehicle navigation system replacing maps. Often, the new system will be more expensive per unit than paper. A return on investment analysis that estimates metrics such as task performance time for skilled users can help convince management or consumers to invest in the product.
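To make the return-on-investment logic concrete, consider a back-of-the-envelope calculation (all figures here are hypothetical, chosen only for illustration): if a predictive model estimates that a new handheld saves 10 seconds per task over the paper system, and a worker performs 200 tasks per day, the daily saving per worker is

$$200 \times 10\ \text{s} = 2{,}000\ \text{s} \approx 33\ \text{min of labor per day},$$

which can be weighed directly against the device's per-unit cost.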

Each purpose delivers different information to the development team and puts different requirements on the prototype, as table 1 shows.

Communicating design ideas

To communicate design ideas to people not directly involved in the design process, a prototype must convey the planned interface's general feeling. The prototype might be as simple as a series of drawings of the different system states (such as screens) that a designer presents to the rest of the group. The prototype must convey how users will access information and perform tasks on the system so the team can see the envisioned interaction techniques and accessibility of information, as well as what performing some tasks would feel like. Because designers often present their work in person, they don't need to make the storyboard interactive or fill in all button and link labels; designers can describe interaction and command responses rather than fully specify them. Typically, they don't need to provide information about projected system response time for user actions because such lower-level information doesn't communicate well during a presentation. However, the prototype's look-and-feel is critical: the closer the storyboard's visuals are to the final system, the easier it will be for the audience to imagine a user's interaction with the system.

With the advent of what-you-see-is-what-you-get HTML editing tools, many designers already develop and present their prototypes in HTML. Because designers can incorporate arbitrary images into HTML, the technology is expressive enough to create storyboard prototypes that communicate design ideas at whatever level of detail they desire (for example, hand-drawn sketches or photographs of real devices or components). HTML also allows for simple interactivity with link transitions and basic components (pull-down menus and text boxes, for example) and annotations explaining visuals, labels, procedures, and design rationale, such as those used in the city guide in figure 1. Such prototypes can also serve as a form of interface requirements from which designers can write code. HTML storyboards are easy to share over the Web (which sometimes unintentionally gives them "legs" as they are passed around and appear in places the original designers didn't intend). Overall, HTML storyboards are efficient representations for producing shared understanding and buy-in in an organization.
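As a concrete illustration, one state of such a storyboard might look like the following minimal sketch. The file names, image, hotspot coordinates, and annotation text are hypothetical, not taken from the article's city guide; each system state is one page, and each link transition stands in for a user action:

```html
<!-- state1.html: one state of a hypothetical storyboard.
     A scanned sketch or screenshot stands in for the real screen;
     clicking the hotspot transitions to the next state (state2.html). -->
<html>
  <head><title>Storyboard: find a museum (state 1)</title></head>
  <body>
    <img src="state1.png" usemap="#controls" alt="Sketch of the start screen">
    <map name="controls">
      <!-- Hotspot drawn over the sketched "Maps" button;
           the link itself is the state transition -->
      <area shape="rect" coords="10,200,90,230" href="state2.html" alt="Maps">
    </map>
    <!-- Annotation of procedure and rationale, visible to the team -->
    <p><em>Step 1 in the maps method: tap Maps to open the city map.</em></p>
  </body>
</html>
```

Walking through a task is then just following links from page to page, which is what makes the same storyboard usable for presentations, user tests, and (as described later) model-based analysis.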

User testing

User testing places different requirements on prototypes. As before, designers must build the anticipated information navigation into the prototype because navigation schemes are easily and usefully tested by asking users to think aloud as they perform tasks. The design team doesn't have to specify correct or optimal procedures for each task because user tests often aim to determine whether a novice user can discover a correct path to accomplish each task. (However, if the user test includes assessing training material effectiveness, correct procedures are usually written into the training documents and are thus a requirement for the test.) The designers must put into the prototype their best guesses at button or link labels, command names, and icons because user tests can uncover ambiguities in such items. Although user testing can be done with noninteractive systems such as paper prototypes, tests are much easier to administer and the user experience is much closer to actual use if the system is at least minimally interactive. However, because thinking aloud during a task slows performance, timing measures are unreliable in user tests and typically aren't recorded. Thus, the prototype doesn't have to provide a realistic system response time.

TABLE 1. Requirements and products of prototypes for different purposes.

Requirement                    | Communicating design ideas      | User testing                                             | Predictive human-performance modeling
Information navigation         | Yes                             | Yes                                                      | Yes
Correct procedures             | Yes                             | It depends                                               | Yes
Best-guess labels and commands | It depends                      | Yes                                                      | No
Best-guess visuals             | It depends                      | Yes                                                      | No
Size and layout of controls    | It depends                      | Yes                                                      | Yes
Best-guess response time       | No                              | No                                                       | Yes
Interactivity                  | No                              | It helps a lot                                           | It depends
Products                       | Team understanding; team buy-in | Usability problems of novices; ideas for revised design  | Performance time of skilled users; bottlenecks in performance

HTML prototypes are well suited for user testing. They can include realistic information navigation, labels, commands, buttons, controls, and visuals. Links that take users to another page in response to their action can simulate interactivity. Because timing measures aren't recorded, unrealistic system response time incurred by the browser isn't a problem. User tests uncover many pervasive computing system usability problems even if the prototype is displayed on a desktop browser and operated with a mouse. In some cases, designers can even approximate the pervasive system's form factor using the HTML prototype. For example, they can design the HTML to display well on Pocket Internet Explorer and thereby deliver a prototype on a PDA. In our lab, we've positioned a touch screen on the dashboard of a driving simulator and used this to prototype in-vehicle navigation systems in HTML.

Predictive human performance modeling

Prototyping is also useful for obtaining information about skilled users' performance. Doing this empirically is expensive because measuring skilled performance requires users to interact with a robust prototype for hours, days, or even weeks. However, we can obtain skilled users' performance time analytically using predictive models of human behavior. Such models have been a centerpiece of applied cognitive psychology in human-computer interaction (HCI) for two decades. In the early 1980s, researchers introduced and validated the model human processor (MHP); keystroke-level models (KLMs); and goals, operators, methods, and selection rules (GOMS).2 Since that time, other cognitive architectures—unified psychological theories and computational frameworks—have been developed to model human performance at many levels. For example, researchers have used ACT-R3 and similar cognitive architectures to simulate behavior in several complex dynamic domains, including air-traffic control4 and driving.5

Historically, predictive human behavior modeling has been done

• manually by a human analyst,
• computationally via faked interaction with a skeletal prototype, or
• computationally via real interaction with an implemented prototype.

In the most basic frameworks (such as MHP and KLM), a skilled analyst lists all of the actions the human must perform and the estimated system response time to those actions. The analyst then adds up the time to perform each listed operator. When using a computational cognitive architecture (for example, ACT-R and some versions of GOMS), the analyst can fake the interaction with a prototype simply by programming the architecture to wait a certain amount of time for each manual or perceptual action. The wait duration would correspond to established estimates from the psychology literature. Finally, the analyst might choose to exploit the computational cognitive architecture by connecting the model to some functional prototype so that the human simulation and the system simulation (prototype) can work in concert to generate predictions of skilled human performance.

Figure 1. Parts of an HTML storyboard of an information-finding task on a Palm PDA, annotated with user procedures and design rationale (callouts include "Step 1 in the maps method," "Several steps later," and "Step 1 in the Graffiti method").

OCTOBER–DECEMBER 2005 ■ PERVASIVE computing

In all three types of modeling, designers must specify the information navigation for the prototype that the human analyst works from (in the first and second types) or that the cognitive architecture connects to (in the third type), or the analysis can't proceed. The analyst must know the task's procedures, which must be programmed into the cognitive architecture as task knowledge to simulate a user who knows how to use the system. The designer need not work out labels and visuals because skilled users would have learned even the most unintuitive terms or icons. Also, cognitive architectures typically function as well with a generic label ("Label 1," for example) as they do with a semantic label for a button or link. (In contrast, some computational cognitive models, such as Marilyn Blackmon, Muneo Kitajima, and Peter Polson's comprehension-based linked model of deliberate search, require semantic labels for buttons, menu items, or links to predict the exploration behavior of users.6) However, skilled performance time depends on the controls' size and position, so the prototype must specify these. All three types of models require an estimate of system response time to make accurate predictions, but an interactive prototype is needed only when it's connected to a cognitive architecture.

HTML storyboards fill the requirements of prototypes for predicting human performance. They depict the interface sufficiently to give information navigation, correct procedures, and the controls' size and layout. They usually depict best-guess labels, commands, and visuals, which aren't needed by human performance models but also don't get in the models' way. As we describe next, we adapted ACT-R to interact with an HTML prototype through a Web browser and created a mechanism for specifying estimated system response time that's independent of the browser's actual response time or the cognitive model's speed.

Thus, HTML storyboards can fulfill the requirements to predict skilled human performance as well as those related to communicating design ideas and user testing. Building HTML storyboards as prototypes therefore fulfills multiple purposes for sharing and evaluating proposed user interfaces for a large subset of interactive pervasive systems.

CogTool

CogTool aims to make human performance modeling more accessible and affordable for computer system designers.1 The psychological theory underlying CogTool draws from scores of results in the psychological literature and has been validated through HCI research and real-world use over the last 20 years.7 In recent years, we've undertaken user-centered design of CogTool to understand UI designers' knowledge, skills, workflow, and workplace influences; uncover common error patterns; and user-test iterations to improve the design. The result is a tool that produces keystroke-level predictions of human performance from a demonstration of tasks on an HTML prototype. These predictions are as accurate as can be expected from the underlying theory (typically regarded as producing skilled performance time estimates within 20 percent of the actual performance) with an order of magnitude less effort than when done by hand.

Underlying cognitive theory

We based CogTool on the KLM version of a human information processing (HIP) theory of human behavior that Stuart Card, Thomas Moran, and Allen Newell proposed in the early 1980s.2 Figure 2 shows a HIP model, which consists of three modules:

• The perception module takes information from the world (that is, the prototype) and passes it to a cognitive module.
• The cognitive module processes the information from the perception module, possibly combining it with information from other sources (such as a long-term memory), and sends commands to a motor module.
• The motor module manipulates the prototype to take action in the world.

Figure 2. A human information processing model interacting with a device prototype. (The HTML storyboard prototype supplies the size and location of widgets and the estimated system response time; the perception module operates at roughly 100 msec, the cognitive module at roughly 50 msec per step with M = 1,200 msec, and the motor module includes K = typing speed, P = Fitts's Law, and G = Graffiti speed.)

The HIP theory underlying CogTool is specific to skilled human behavior with computer-based tasks with certain time parameters. Perception takes hundreds of milliseconds (ms) because signals in computer-based tasks are assumed to be relatively easy to perceive. (For example, perception would take longer if the tasks took place in extremely low light for visual signals or in high noise for auditory signals.) The motor module in KLMs for desktop applications includes operators for pressing keys (K), pointing (P), and homing between the keyboard and the mouse (not relevant to many pervasive computing systems, so not shown). To extend this module to PDAs, we added a Graffiti motor operator (G). We set the motor operators' durations based on psychology literature, with K dependent on a person's typing speed, P related by Fitts's Law to the distance moved and the target's size, and G empirically determined in an experiment using a Palm PDA.8 On a desktop, we'd model point-and-click by a K following each P to a target; for pervasive systems, the K is unnecessary because the original formulation of Fitts's Law includes physical contact with the target,9 analogous to pressing a physical button or tapping with a stylus.
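For reference, the pointing operator P follows Fitts's Law; in one common (Shannon) formulation, the movement time to a target of width w at distance d is

$$T_P = a + b \log_2\!\left(\frac{d}{w} + 1\right),$$

where a and b are constants fitted empirically for the pointing device. (This particular formulation and its coefficients are standard in the HCI literature but are not spelled out in this article, so the exact form CogTool uses may differ.)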

KLM has one mental operator (M) that represents a composite of all unobservable activities, including cognitive and perceptual activities such as comprehending perceptual input and recognizing the situation, command recall, and decisions. Card, Moran, and Newell empirically estimated M to be 1,350 ms. In CogTool, we estimate M to be 1,200 ms to accommodate the ACT-R architecture, which normally requires a 150-ms look-at operator to accompany each typical mental operation.

KLM makes approximate predictions of human behavior by assuming that a skilled person will execute each operator necessary to complete a given task sequentially. It adds all times together for a total task time estimate. Predicting the sequence of motor operators is straightforward, requiring only that an analyst know the task and system sufficiently well to list the operators needed to perform the task successfully. Including mental operators is more difficult. Card, Moran, and Newell, working with command-based desktop interfaces, defined five rules for placing Ms in a KLM. These rules depend on distinctions such as commands, arguments, and cognitive units. CogTool has updated and automated the placement of M operators by tying them to specific types of widgets found in modern interfaces. If the modeled system's response time makes a user wait (which in practice means that it exceeds the duration of an M), the KLM includes the waiting time in the total task time.
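As a hypothetical illustration of this summation (the M value of 1,200 ms comes from the text; the tap duration of roughly 350 ms is invented for the example), a task requiring one mental operator followed by three stylus taps would be estimated at

$$T_{\text{task}} = M + \sum_{i=1}^{3} P_i \approx 1{,}200 + 3 \times 350 = 2{,}250\ \text{ms}.$$

In a real analysis each P would be computed separately from Fitts's Law, since each tap has its own target distance and size.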

In the last two decades, while developers were using KLM pragmatically in HCI, cognitive psychologists were developing computational cognitive architectures with more sophisticated explanations of human performance. ACT-R incorporates significantly more detailed theoretical accounts of human cognition, perception, and performance than a typical KLM. For instance, the ACT-R computational engine includes theories of eye movement and visual attention, motor preparation and execution, and a theory of which operations can proceed in parallel with other operations. In 2003, researchers implemented a KLM-like modeling language, ACT-Simple, that compiles into ACT-R commands.10 We built CogTool on top of ACT-Simple to marry KLM's simplicity with the ACT-R computational engine's predictive power. We added the ability for ACT-R to interact directly with HTML storyboard prototypes, "seeing" the widgets on the HTML page and operating those widgets with ACT-R's "hands."

Most work with KLM assumes that the user is currently engaged in only one task. Our research combines separately developed models of single tasks to predict the multitasking behavior that results from performing both tasks together. Building on experience developing multitasking driving and driver distraction models solely in the ACT-R architecture,5 we've incorporated an ACT-R model of driver behavior into CogTool.11 We can use CogTool to build prototypes of secondary tasks, such as dialing a cell phone or finding the nearest gas station with an in-vehicle navigation system, and integrate them with the driving model. Although theories of how people switch between tasks are still evolving, the CogTool driving model approximates human behavior by switching to the secondary task when driving is safe (that is, the vehicle is stable and near the lane center). The analyst must tell CogTool when to switch from the secondary task back to driving—for example, after a few button presses, at the end of a subtask, or at the end of the entire task. Task-switching strategies can have a large impact on driving performance. The integration with driving extends CogTool conceptually into a much broader array of interfaces and devices that are used in the context of real-world multitasking, like many of today's pervasive systems.

How designers use CogToolCogTool’s most important aspect is

that designers don’t need to have anyexperience with or knowledge of humanperformance modeling—in fact, the the-ory’s details and models are hidden from

OCTOBER–DECEMBER 2005 PERVASIVEcomputing 31

CogTool has updated and automated

the placement of mental operators by tying

them to specific types of widgets found in

modern interfaces.

Page 6: Multipurpose Prototypes for Assessing User Interfaces in Pervasive Computing Systems

the designer. Instead, CogTool providesa simple interface for building interfaces,specifying common tasks on the inter-face, and automatically predicting andanalyzing the behavioral results of skilledusers, with models in the backgroundacting as simulated users.

A designer begins by building an HTML storyboard prototype of the proposed interface using CogTool. CogTool adds a palette to Dreamweaver that produces HTML and JavaScript that can communicate with CogTool while the analyst demonstrates a task and with ACT-R when a model is run. This palette contains many familiar UI widgets for desktop applications—such as buttons, checkboxes, links, and textboxes. It also contains tools more suited for prototyping pervasive computing systems, such as images, hotspots to place on the images, and a Graffiti area. Other palette tools let designers express nonmanual actions in a task. A tool that looks like a steering wheel tells the model to switch back to driving from a secondary task. The eye tool tells the model to visually gather information from a hotspot but not touch it. The stopwatch inserts a system response-time delay before the HTML storyboard page loads. The microphone and speaker tools let the model speak and listen to the storyboard, thereby generating a prototype for auditory interfaces.

After creating the HTML prototype, the designer demonstrates the desired tasks by clicking on widgets in the storyboard. Behind the scenes, CogTool's Behavior Recorder records and stores all interactions with the HTML prototype.12 (The Behavior Recorder doesn't time the designer's actions during these demonstrations; the designer could click on one widget, have a cup of coffee, then click on the next widget, and the resulting model of skilled performance would be the same as if the designer didn't take a coffee break.) The designer then tells CogTool whether the prototype is of a mouse-based desktop system or a system that uses a stylus (or finger) as its primary interaction mode.

For standard desktop systems, CogTool translates the demonstrated clicks to mouse and keyboard actions, but for more general pervasive systems, it interprets clicks as taps with a stylus, Graffiti strokes, or physical depressions of a button or similar component (such as on a phone or navigation device). CogTool also places mental operators according to rules derived from Card, Moran, and Newell's results but updated to apply to the widgets in the CogTool palette. The demonstration's end result is a model expressed in ACT-R. When run, ACT-R interacts directly with the HTML prototype, "seeing" the widgets on the page and "pressing" them, thereby operating the prototype and producing a trace of skilled user behavior with quantitative predictions of performance time.

If the designer wishes to evaluate the interface while driving, he or she selects the Driving + Task Model option in the Model Launcher. A window opens, showing a road from a driver's viewpoint and a wire frame of the interface on the dashboard (figure 3). The model drives the car down the road and performs the secondary task, switching tasks as described previously. The resulting performance predictions are typical of empirical driver-distraction studies—for example, vehicle lateral deviation from lane center and brake-response delay from the onset of an external event (such as lead-car braking). Obtaining such results from CogTool requires no additional effort from the designer, except that it is sometimes useful to designate different reasonable points at which the driver switches between interface use and driving to explore the boundaries of safe and reckless attention to the road.

Experiences in pervasive computingWe’ve used CogTool as a prototyping

and evaluation tool in various pervasivecomputing tasks. In fall 2004, 27 under-graduate and graduate students inCarnegie Mellon’s HCI Methods classused CogTool to model different meth-ods of entering a new appointment intoa PDA calendar. Graduate students in aspring 2004 cognitive modeling class usedCogTool to model common tasks on aPalm PDA and an in-vehicle navigationdevice. One student created more than300 models to explore strategies for error

32 PERVASIVEcomputing www.computer.org/pervasive

R A P I D P R O T O T Y P I N G

Wireframe of the device.ACT-R's eyeswill look at

it and its hand willmove to it when doing

the secondary task

Driver View

Position of ACT-R's righthand when it is on the wheel

The far point whereACT-R's glance resides

when it monitors driving

Device wireframe.ACT-R's eyes will look at it, and the

right hand will move to it when doing the

secondary task.

ACT-R's hand in transit to the cell phone

Figure 3. Driver view of the CogToolinterface showing a user performing asecondary task while driving (the primarytask).

Page 7: Multipurpose Prototypes for Assessing User Interfaces in Pervasive Computing Systems

detection and correction in cell phoneshort-message-service-like text entryusing both multitap and T9 (www.T9.com). One software engineeringresearcher used CogTool to predict theskilled use of a Palm application to findinformation about the MetropolitanMuseum of Art (see figure 1) and checkedher predictions with a user study.13 In herstudy, people accessed the requested infor-mation using one of four methods: mapnavigation, query by a soft keyboard,query by Graffiti, and locating themuseum in a scrollable list. CogToolquickly generated a prototype of the inter-face and demonstrated the four tasks toproduce predicted performance results.CogTool’s performance correspondedextremely well with human user results,with an average absolute error of less than8 percent.

Finally, we used CogTool to replicate earlier results that predicted driver distraction with cell phone dialing models written by expert ACT-R modelers.10 We created mock-ups of a simple cell phone and demonstrated a 7-digit dialing task, assuming that drivers switched back to driving between the 3- and 4-digit blocks of the full number. Simply by asking CogTool to run this as a secondary task while driving, we could simulate this behavior and gather performance measures of the interactive effects of dialing and driving. Figure 4 shows that the tool's predictions match well with the human data. Driving slightly increases the total time needed to complete dialing, and dialing significantly affects the vehicle's lateral deviation (the distance from lane center as a measure of steering accuracy). CogTool thus produced the earlier expert findings with only minutes of prototyping and simulation, hiding the complex models' details from the designer and simply presenting the final measures for analysis.

Because CogTool simplifies the modeling process from a programming task, which involves specifying human behavior at the eye- and finger-movement level, to a demonstration task, it makes human performance modeling more accessible to pervasive and interactive system designers. Because ACT-R already includes several modalities (such as vision and audition or manual and speech acts) and is relatively easy to extend, CogTool will grow as a modeling tool for pervasive computing systems as well as traditional desktop systems. You can find CogTool documentation and downloads at www.cs.cmu.edu/~bej/cogtool. We look forward to hearing about your experiences and improving CogTool to meet your needs.

ACKNOWLEDGMENTS
We thank Peter Centgraf, Alex Eiser, Mike Horowitz, and Gus Prevas for their work on CogTool; Lu Luo for her Palm PDA model and data; and the students in HCI Methods and Cognitive Models for HCI at Carnegie Mellon University for their efforts using CogTool for modeling assignments.

Bonnie John was supported by US Office of Naval Research grant N00014-03-1-0086. Dario Salvucci was supported by US Office of Naval Research grant N00014-03-1-0036. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the US Office of Naval Research or the US government.

REFERENCES
1. B.E. John et al., "Predictive Human Performance Modeling Made Easy," Proc. Human Factors in Computing Systems: CHI 2004 Conf., ACM Press, 2004, pp. 455–462.

2. S. Card, T. Moran, and A. Newell, The Psychology of Human-Computer Interaction, Lawrence Erlbaum, 1983.

3. J.R. Anderson et al., "An Integrated Theory of the Mind," Psychological Rev., vol. 111, no. 4, 2004, pp. 1036–1060.

4. N.A. Taatgen and F.J. Lee, "Production Compilation: A Simple Mechanism to Model Complex Skill Acquisition," Human Factors, vol. 45, no. 1, 2003, pp. 61–76.

5. D.D. Salvucci, "Predicting the Effects of In-Car Interface Use on Driver Performance: An Integrated Model Approach," Int'l J. Human-Computer Studies, vol. 55, no. 1, 2001, pp. 85–107.

6. M.H. Blackmon, M. Kitajima, and P.G. Polson, "Tool for Accurately Predicting Website Navigation Problems, Nonproblems, Problem Severity, and Effectiveness of Repairs," Proc. Human Factors in Computing Systems: CHI 2005 Conf., ACM Press, 2005, pp. 31–40.

7. B.E. John and D.E. Kieras, "Using GOMS for User Interface Design and Evaluation: Which Technique?" ACM Trans. Computer-Human Interaction, vol. 3, no. 4, 1996, pp. 287–319.

8. M.D. Fleetwood et al., "An Evaluation of Text Entry in Palm OS-Graffiti and the Virtual Keyboard," Proc. Human Factors and Ergonomics Soc. 46th Ann. Meeting, CD-ROM, Human Factors and Ergonomics Soc., 2002.

9. P.M. Fitts, "The Information Capacity of the Human Motor System in Controlling the Amplitude of Movement," J. Experimental Psychology, vol. 47, 1954, pp. 381–391.

10. D.D. Salvucci and F.J. Lee, "Simple Cognitive Modeling in a Complex Cognitive Architecture," Proc. Human Factors in Computing Systems: CHI 2003 Conf., ACM Press, 2003, pp. 265–272.

[Figure 4. Comparison of CogTool and human user predictions of the effects of (a) driving on the time spent dialing a cell phone and (b) dialing a cell phone on a vehicle's deviation from the lane center.]

11. B.E. John et al., "Integrating Models and Tools in the Context of Driving and In-Vehicle Devices," Proc. 6th Int'l Conf. Cognitive Modeling, Lawrence Erlbaum, 2004, pp. 130–135.

12. K.R. Koedinger, V. Aleven, and N. Heffernan, "Toward a Rapid Development Environment for Cognitive Tutors," Artificial Intelligence in Education: Shaping the Future of Learning through Intelligent Technologies: Proc. Conf. Artificial Intelligence in Education, IOS Press, 2003, pp. 455–457.

13. L. Luo and B.E. John, "Predicting Task Execution Time on Handheld Devices Using the Keystroke-Level Model," Proc. Human Factors in Computing Systems: CHI 2005 Conf., ACM Press, 2005, pp. 1605–1608.

For more information on this or any other computing topic, please visit our Digital Library at www.computer.org/publications/dlib.


the AUTHORS

Bonnie E. John is a professor in the Human-Computer Interaction Institute at Carnegie Mellon University. Her research focuses on modeling human performance to produce quantitative predictions of performance with less effort than prototyping and user testing. She also bridges the gap between HCI and software engineering by including usability concerns in software architecture design. She received her PhD in psychology from Carnegie Mellon University and is a member of the CHI Academy. Contact her at the Human-Computer Interaction Inst., Carnegie Mellon Univ., 5000 Forbes Ave., Pittsburgh, PA 15213; [email protected]; www.cs.cmu.edu/~bej.

Dario D. Salvucci is an assistant professor of computer science at Drexel University. His research explores how computers can represent, predict, and respond to human behavior with cognitive models, particularly as applied to driving and driver distraction. He received his PhD in computer science from Carnegie Mellon University. Contact him at the Dept. of Computer Science, Drexel Univ., Philadelphia, PA 19104; [email protected]; www.cs.drexel.edu/~salvucci.
