


CHI '92, May 3-7, 1992

Evaluating Two Aspects of Direct Manipulation in Advanced Cockpits

James A. Ballas, Constance L. Heitmeyer and Manuel A. Pérez

Human-Computer Interaction Lab
Naval Research Laboratory

Washington, D.C. 20375-5000
(202) 767-2774, [email protected]

ABSTRACT
Increasing use of automation in computer systems, such as advanced cockpits, presents special challenges in the design of user interfaces. The challenge is particularly difficult when automation is intermittent because the interface must support smooth transitions from automated to manual mode. A theory of direct manipulation predicts that this interface style will smooth the transition. Interfaces were designed to test the prediction and to evaluate two aspects of direct manipulation, semantic distance and engagement. Empirical results supported the theoretical prediction and also showed that direct engagement can have some adverse effects on another concurrent manual task. Generalizations of our results to other complex systems are presented.

KEY WORDS: Direct manipulation, interface styles, interface design, adaptive automation, intermittent automation, aircraft interfaces, intelligent cockpit.

INTRODUCTION
The rapid evolution of user interfaces is occurring not only in office systems but also in modern cockpits, which are computer-based and include advanced graphical displays [14]. However, modern cockpits differ from traditional office systems in several fundamental ways. First, unlike office systems, they often include sophisticated automation, such as the ability to fly on automatic pilot. Moreover, unlike office applications, the cockpit application is dynamic and complex. The pilot must not only handle large quantities of real-time, often continuous, input data; he must also perform several demanding tasks concurrently, usually under severe timing constraints. Finally, unlike users of office systems who typically communicate via electronic mail, the pilot of a modern cockpit communicates in real time via networked voice and data links. Given these differences, the cockpit interface presents many design challenges that the developers of office systems seldom encounter.

1992 ACM 0-89791-513-5/92/0005-0127

An important question in designing the user interface of modern cockpits is how to handle automation. Our research is part of a larger research program in adaptive automation whose role is to allocate tasks between the pilot and the computer system in an optimal manner [10]. In adaptive (i.e., intermittent) automation, the pilot performs a task only intermittently. Given a dual-task situation, a rise in the level of difficulty of one task causes automation of the second task. Having the computer system take over the second task allows the pilot to focus his efforts on the task whose difficulty has increased. Once the difficulty level of the first task returns to normal, the pilot resumes control of both tasks. Such an approach to automation is expected to result in better overall pilot/system performance [10]. Because the pilot only performs the first task intermittently, a challenging problem, and the problem that this paper addresses, is how to design an interface that supports a smooth transition from automated to manual mode.

This paper presents the results of our empirical research on interface styles for adaptive automation. Our research is designed to test predictions from a theory of direct manipulation. A fundamental goal of the research is to determine whether a direct manipulation interface has performance benefits in adaptive automation; i.e., does direct manipulation lead to improved performance when a pilot must quickly resume a task that has been previously automated? A related goal is to separate and evaluate two aspects of direct manipulation identified by the theory, namely, distance and engagement. In this paper, we introduce the direct manipulation theory, present our hypothesis about the effect of interface style in adaptive automation, describe the interfaces developed to test our hypothesis, and summarize the empirical results. We conclude with a discussion of the implications of our results.

BACKGROUND
Designing an interface for an adaptive system involves many issues and decisions, but little theoretical guidance or empirical information is available. There is general agreement on what the interface should accomplish. As a first priority, the interface should enable the pilot to maintain both situational awareness and system control


[10]. We define situational awareness as the extent to which the pilot has the knowledge needed to perform a specified task or tasks. Clearly, this knowledge depends upon the specific state of the aircraft and selected aspects of the aircraft environment. In adaptive (i.e., intermittent) automation, the pilot shifts from manually performing a task to monitoring its automated performance and then back to manual operation. In this situation, the key to assessing situational awareness is how well the pilot can resume a task that has been previously automated. We claim that a critical factor in achieving a smooth transition from automated to manual performance of a task is interface style.

Hutchins, Hollan, and Norman (HHN) have developed a theory of direct manipulation [6]. They characterize direct manipulation interfaces according to a model world metaphor: the user interacts with an interface that represents the task domain itself, the domain objects and the effect of user operations on those objects. Command language interfaces behave according to a conversational metaphor: the user and the interface have a conversation about the application domain. The interface acts as an intermediary between the user and the domain. Although typically associated with office computer systems, direct manipulation is also being considered for large safety-critical systems, such as nuclear power plants [2]. HHN concluded that two aspects of direct manipulation account for its performance advantages, low distance and direct engagement. The first aspect is the "information processing distance between the user's intentions and the facilities provided by the machine". Performance advantages come with less distance, because less cognitive effort is needed to understand and manipulate the domain objects. HHN call such an interface semantically direct and claim that it can be achieved by "matching the level of description required by the interface language to the level at which the person thinks of the task".

Distance is of two types, semantic and articulatory. Semantic distance is the difference between the user's intentions and the meaning of the expressions available in the interface, both expressions that communicate the user's intentions to the computer and expressions whereby the computer system provides user feedback. For example, if the user wishes to delete all files whose names end in "text" and the computer system (e.g., the Macintosh) has no single expression for this purpose, then significant semantic distance exists between the user's intentions and the expressions available in the interface. Articulatory distance is the difference between the physical form of the expressions in the interface and the user's intentions. For example, when a Unix user wants to display a file and to do so he must invoke a command named "cat", significant articulatory distance exists between the name of the Unix command and the intended user operation. Our studies have focused on semantic distance. We have proposed follow-up studies to investigate issues concerned with articulatory distance.
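The file-deletion example above can be made concrete. The following sketch is purely illustrative (it is not one of the interfaces studied in this paper): a single expression that matches the intention "delete all files whose names end in 'text'" keeps semantic distance low, while the absence of such an expression forces the user to decompose the intention into lower-level steps.

```python
import glob
import os

def delete_matching(pattern="*text"):
    """Low semantic distance: one expression captures the whole
    intention, so little translation is needed by the user."""
    for path in glob.glob(pattern):
        os.remove(path)

def delete_matching_stepwise(suffix="text"):
    """Higher semantic distance: the same intention must be
    decomposed into listing, testing, and deleting each file."""
    for name in os.listdir("."):
        if name.endswith(suffix):
            os.remove(name)
```

Both functions achieve the same effect; the difference lies in how directly the user's intention maps onto the expression the interface accepts.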

The second aspect of direct manipulation is engagement, i.e., the involvement that comes when the user is able to interact directly with the application domain and the objects within it rather than interacting through an intermediary. The key to direct engagement is interreferential I/O, which permits "an input expression to incorporate or make use of a previous output expression". For example, if a listing of file names is displayed on the screen, one of these names can be selected and operated on without entering the name again. In Draper's view [5], the important aspect of interreferential I/O is that the user and the computer system share a common communications medium. This takes the notion of interreferential I/O beyond the Unix concepts of channels and pipes. In direct manipulation, the shared medium is usually a visual display which presents an explicit, often graphical, view of the task domain.
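The file-listing example of interreferential I/O can be sketched minimally as follows (function names are ours, for illustration only): a previous output expression, the displayed listing, is incorporated into the next input expression by reference, so the user never retypes a name.

```python
def show_listing(names):
    """Output expression: display the file names with indices."""
    for i, name in enumerate(names):
        print(f"[{i}] {name}")
    return names  # the displayed listing doubles as a referent table

def pick(listing, index):
    """Input expression that incorporates the previous output:
    the user refers to a displayed item rather than retyping it."""
    return listing[index]
```

In a command language without interreferential I/O, the user would instead have to re-enter "notes.text" by hand after seeing it displayed.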

Related Research on Direct Manipulation
An early study comparing several interfaces concluded that usability depends more on specific interface design than on interface style [12]. Contrary to expectations, iconic interfaces were inferior to menu systems and command language interfaces for new and transfer users. More recent studies have generally shown advantages for direct manipulation over command language interfaces [15]. For example, Karat [7] found consistently faster times for several file management tasks in a direct manipulation interface that used pointing and dragging operations on iconic representations of files. However, Karat did find an advantage for the command language interface on one particular type of file management task. Thus, evaluations of interface styles need to be sensitive to task-specific effects. Along this line, Elkerton and Palmiter (cited in Kieras [8]) suggest that the basic principle of direct manipulation lies in the replacement of complex cognitive operations with perceptual and motor activities. Thus the advantage of direct manipulation may lie in tasks with complex cognitive operations that can be transformed into motor and perceptual operations.

Research on direct manipulation has been mostly on conventional applications, such as word processing and file management. A notable exception is a study by Benson and her colleagues [3] which compared a conventional interface to a direct manipulation interface for a parts manufacturing system. The conventional interface used menus, function keys, typed commands, displayed textual information, and paged displays. The direct manipulation interface used a mouse as the only input device and provided a continuous display of important information. The evaluation of these interfaces used performance measures relevant to manufacturing, such as cost, inventory levels and status, and late deliveries. Performance with direct manipulation was superior on three of five dependent measures.

No previous research on direct manipulation has attempted to tease apart semantic distance and direct engagement and to determine which is important in user performance. Furthermore, previous research has evaluated applications designed for purposes other than evaluation of


an interface style. The interfaces in our study were designed specifically to study direct manipulation by separating and evaluating the two aspects identified in the HHN theory.

Direct Manipulation in the Cockpit
The effectiveness in the cockpit of a direct manipulation interface and its two aspects remains an open question. Some studies suggest that navigation displays should present a model world to the pilot. For example, in 1987, Marshak, Kuperman, Ramsey, and Wilson [9] found that moving-map displays in which the viewpoint is similar to what would actually be seen by looking outside the plane led to improved performance. However, in other kinds of displays, a graphical representation of the model world does not provide an advantage. For example, Reising and Hartsock [11] found that in warning/caution/advisory displays, a schematic of the cockpit showing the controls that were needed to handle an emergency did not improve performance. The important factor in improved performance was a checklist of the required procedures (which is closer to what a command language interface would offer).

Ironically, in modern flight control systems, some trends have been away from direct manipulation. For example, fly-by-wire systems remove the pilot from direct control of wing surfaces. Bernotat [4] argues against this trend, suggesting that, in such systems, the pilot needs direct sensory feedback about the aircraft's performance. Such feedback is consistent with the notion of direct manipulation. Other trends in cockpit controls suggest a move toward direct manipulation, e.g., the incorporation of touchscreen displays. However, the incorporation of pointing devices into the flight deck needs to be carefully evaluated; e.g., what is the effect of the pilot's use of two pointing devices concurrently (a touchscreen and a joystick)?

Direct Manipulation in Intermittent Automation
An issue in interface design for intermittent automation is automation deficit, the initial decrease in pilot performance that occurs when a task that has been previously automated is resumed. This deficit may reveal itself in several ways: slower human response, less accurate human response, subjective feelings of not being in control, subjective feelings of stress, etc. Some previous studies have shown an automation deficit for manual control tasks, while others have not [10]. In our research we are interested in automation deficits in response time and the effect of interface style on automation deficit.

Our hypothesis is that direct manipulation interfaces lead to a reduction in automation deficit that is reflected in decreased response times right after automation ceases. The rationale underlying this hypothesis is that decreased semantic distance and improved direct engagement enhance a pilot's ability to monitor a task that is automated and then to quickly resume the task. Besides testing the general hypothesis, we evaluated the importance of each aspect of direct manipulation in minimizing automation deficit.

To test our hypothesis we evaluated the effect of interface styles on a person's ability to resume a task quickly after a period of automation. More specifically, we compared performance in the first few seconds of the manual mode to performance a minute later, using different types of interfaces.

EXPERIMENTAL DESIGN

Subjects
Twenty subjects were recruited from NRL personnel, with five randomly assigned to each of the four types of interfaces used in the tactical assessment task. All were screened for normal color vision. Two of the subjects were licensed pilots.

General Procedure
The experiment required subjects to perform two tasks, a pursuit tracking task and a tactical assessment task. To establish a setting for adaptive automation, the difficulty of the tracking task alternated between moderate and high throughout the experiment. During the moderate difficulty phases of the tracking task, the subject performed both the tracking task and the tactical assessment task. Each time the difficulty of the tracking task rose to high, the tactical assessment task was automated, and the subject performed the tracking task only. The display screen used in the experiment was partitioned into two nonoverlapping windows, one for the tracking task, the other for the tactical assessment task.
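The alternation just described amounts to a simple rule: the tactical assessment task is automated exactly while tracking difficulty is high. A minimal sketch of that rule (the function and value names are ours, not the experiment software's):

```python
def tactical_task_mode(tracking_difficulty):
    """Adaptive automation rule from the experimental procedure:
    the tactical assessment task is automated whenever the
    tracking task is at high difficulty, and manual otherwise."""
    if tracking_difficulty not in ("moderate", "high"):
        raise ValueError(f"unexpected difficulty: {tracking_difficulty}")
    return "automated" if tracking_difficulty == "high" else "manual"
```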

Tracking Task
The tracking task simulated air-to-air targeting of an enemy aircraft using a gunsight similar to the pipper and reticle on a typical head-up display. The target on the display was a graphical representation of an enemy aircraft. The tracking control was a self-centering, displacement joystick. The two levels of tracking difficulty, high and moderate, were produced by changing the movement rate of the target. Performance measures included RMS amplitude calculated for each axis. In addition, the target's and the subject's movements were recorded for later analysis.
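The RMS amplitude measure can be sketched as follows, assuming it is computed over the deviation between target and gunsight positions along one axis (the paper does not give the exact formula, so this is an illustration of the standard measure rather than the experiment's code):

```python
import math

def rms_error(target_positions, gunsight_positions):
    """Root-mean-square tracking error along a single axis:
    the square root of the mean squared deviation between the
    target's position and the subject's gunsight position."""
    deviations = [t - g for t, g in zip(target_positions, gunsight_positions)]
    return math.sqrt(sum(d * d for d in deviations) / len(deviations))
```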

Tactical Assessment Task
The second task, tactical assessment, is a critical task in a tactical aircraft and one that has become more challenging with the increased capabilities of modern aircraft. Our hypothesis was tested on interfaces for the tactical assessment task. The simulated tactical situation included three classes of targets - fighters, aircraft, and ground-based missiles - and contacts on the targets by sensor systems. The targets first were designated as possible threats using black color coding, but as they got closer to the ownship (the symbol for the aircraft the pilot was in), they were designated as neutral, hostile, or unknown, using blue, red and amber color coding, respectively. The subjects were told that simulated sensor systems were assigning these designations.


The subjects were required to perform two operations, confirm and classify. If the system designated a target as neutral or hostile (i.e., the target was colored blue or red), the subject had to confirm the designation by picking the target and then indicating the proper designation, i.e., neutral for blue targets and hostile for red targets. Thus, confirm decisions only required the subject to discriminate colors. If the system designated the target as unknown (i.e., the target was colored amber), the subject had to classify the target as hostile or neutral based on its behavior. Table 1 provides the rules for designating a target as hostile or neutral. The target class determines what target attribute the subject uses to determine the target's designation.

Target Class    Hostile               Neutral
Fighter         Constant bearing      Bearing away
Airplane        Airspeed ~800         Airspeed ~300
Missile site    Within threat range   Outside threat range

Table 1. Rules for tactical assessment of targets

To classify the amber targets, the subject needed to monitor heading for fighters, speed for aircraft, and projected lateral distance for ground missile threats. The responses were timed and analyzed to produce measures of accuracy and response time.
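The Table 1 decision rules can be encoded as a small lookup, purely for illustration. The attribute names, and the midpoint used here to separate the two airspeed values, are our assumptions rather than details of the experiment software:

```python
def classify(target_class, **attrs):
    """Return 'hostile' or 'neutral' for an amber (unknown)
    target, following the rules of Table 1."""
    if target_class == "fighter":
        # Constant bearing is hostile; bearing away is neutral.
        return "hostile" if attrs["bearing_constant"] else "neutral"
    if target_class == "airplane":
        # Airspeed near 800 is hostile; near 300 is neutral.
        # The 550 midpoint is an assumed separator, not from the paper.
        return "hostile" if attrs["airspeed"] >= 550 else "neutral"
    if target_class == "missile_site":
        # Ownship within threat range is hostile.
        return "hostile" if attrs["within_threat_range"] else "neutral"
    raise ValueError(f"unknown target class: {target_class}")
```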

Training was provided on each task alone and on the dual task without automation. A total of twenty subjects were tested on the intermittent automation, five on each of the four interfaces. Twelve subjects were retested four months later. They received a 3-minute retraining session on both tasks. Further details of the experiment are presented elsewhere [1].

Interfaces for the Tactical Assessment Task
To test our hypothesis, we designed and built four interfaces, using prototyping and iterative development. These four interfaces, which include a direct manipulation interface, a command language interface, and two hybrid interfaces, represent the four combinations of semantic distance and engagement shown in Figure 1. Below, we briefly describe each interface and discuss how each implements some combination of semantic distance and engagement.

The direct manipulation interface (Figure 2a) has direct engagement and low semantic distance. It uses a shared communications medium: both the subject and the computer use the entire tactical assessment window to communicate. This interface simulates a radar display with continuously moving symbols representing the targets. The symbol used to represent a target is an intuitive graphical representation of the target class. Each target symbol is initially colored black but changes to red, blue, or amber once the system assigns the target a designation. A touchscreen overlays the display. The subject confirms or classifies a target by picking a target symbol on the display and selecting one of two strips, labeled HOSTILE and NEUTRAL, located on either side of the display. The subject accomplishes both the pick and the select by touching the appropriate part of the display screen. The words 'HOSTILE' and 'NEUTRAL' in the two side strips are colored red and blue, respectively. For classify decisions, the subject needs to observe the behavior of the graphical symbol that represents the target to determine the proper target designation. For confirm decisions, the subject needs to interpret the color of the target symbol.

                            Semantic Distance
                   Low                          High
Engagement
  Direct     Graphical display with       Tabular display with
             touchscreen                  touchscreen
             (direct manipulation)
  Indirect   Graphical display with       Tabular display with
             keypad input                 keypad input
                                          (command language)

Figure 1. Levels of engagement and semantic distance in the four interfaces for the tactical assessment task.

[Figure 2, four panels: (a) graphical display with touchscreen (direct manipulation); (b) tabular display with touchscreen; (c) graphical display with keypad input; (d) tabular display with keypad input (command language).]

Figure 2. Four interfaces for the tactical assessment task, combining levels of semantic distance and engagement.

The command language interface (Figure 2d) has indirect engagement and high semantic distance. This interface uses a split visual medium: the tactical assessment window is partitioned into a top portion, which displays a table of target names and attributes, and a bottom portion, which is for subject input and error feedback. Each entry in the table describes a single target, providing the target's name (an integer), the target's class, and continuously updated data about the target. The name of the target class


carries the system designation; initially black, it changes to red, blue, or amber once the system has assigned a designation. The table is decluttered, i.e., it only presents the critical attribute for the given target class. After the subject has completed a classify or a confirm operation on a target, the system removes the target entry from the table by scrolling the table. The subject uses a keypad to invoke a confirm or classify operation. For each operation, two sequential keypresses are required, one designating hostile or neutral, a second indicating the target number. For classify decisions, the subject needs to interpret the data in the table to determine the appropriate target designation. For confirm decisions, the subject needs to interpret the color of the word identifying the target class.
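The two-keypress command sequence can be sketched as a tiny parser. The key labels ("H", "N") are hypothetical, chosen for illustration; the paper specifies only that one key designates hostile or neutral and a second gives the target number:

```python
def parse_command(keys):
    """Parse a (designation_key, target_number_key) pair into an
    action, mirroring the two sequential keypresses described above."""
    designation_key, target_key = keys
    designations = {"H": "hostile", "N": "neutral"}  # assumed labels
    if designation_key not in designations:
        raise ValueError(f"unknown designation key: {designation_key}")
    return {"designation": designations[designation_key],
            "target": int(target_key)}
```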

One important difference between the command language interface described above and the command language interfaces associated with more traditional office systems is that the table of target data is updated continuously. Such an approach is dictated in an aircraft context by the impact of external factors on the domain objects (i.e., the targets) and the real-time demands of the tactical domain. The approach makes less sense in an office system where, in most cases, changes to domain objects are made solely by the user and rapid response times are not as crucial.

The third interface (Figure 2c), the graphical/keypad interface, combines the low semantic distance of the first interface with the less direct engagement of the second interface. Like the command language interface, this interface splits the tactical assessment window into two portions. The top portion contains the simulated radar display; the bottom portion is for subject input and error feedback. The subject uses the keypad to enter his classify and confirm decisions.

Finally, the fourth interface (Figure 2b), the tabular/pointer interface, combines high semantic distance with direct selection of the tactical targets on the display using a touchscreen. The subject picks a target by touching the appropriate table entry. He designates the target by touching either the HOSTILE or NEUTRAL strip at the sides of the display. This last interface is similar to a menu interface, except that the table items are updated dynamically. Scrolling in this interface occurs just after the subject completes entry of the confirm or classify decision and is thus associated with the completion of a user action.

Distance and Engagement in the Interfaces
Although the four interfaces intuitively represent different combinations of semantic distance and engagement, it is important to understand the theoretical rationale for the level of distance and engagement in each interface. Metaphorically, the direct manipulation interface presents a model world of the task domain, the command language interface a verbal description. A graphical representation more closely matches the way that a pilot thinks about the tactical situation. More importantly, these two interfaces support the user's goals differently. We distinguish two user goals: to remain aware of the current tactical configuration, and to perform the assigned task. The low distance display was designed to support both goals. To support the first goal, the display continuously provided a graphical representation of the target's location and how the target was moving. To support the second goal, all relevant information about each target was encapsulated by this graphical representation.

The high distance display was designed to support only the second goal, user performance of the assigned task. In developing the high distance display, considerable effort was required to design a table that effectively supports the assigned task. For example, the target's spatial coordinates (x, y positions) were not provided because they are not relevant to the task and would have made the table harder to interpret. Moreover, the color code indicating the type of decision required was shown in the class column only, thus separating the system-assigned designation from the target attribute information. Finally, the columns were arranged to support efficient eye movements.

The levels of engagement can also be considered from several perspectives. We provide a pointing device (i.e., a touchscreen) for high engagement and a keypad for low engagement. The keypad uses a mode shift for two keys in order to preserve a common aspect of command language interfaces and to avoid introducing direct engagement with labelled keys for each action and object, a feature that Shneiderman associates with direct manipulation [15].

The theoretical difference between the levels of engagement in the interfaces is based upon the notion of a shared medium. In the direct engagement interfaces (the direct manipulation interface and the tabular/pointer interface), both the user and the computer system use a shared communications medium, that is, they both operate on the same objects. In the direct manipulation interface, the shared medium is the spatial display. The objects to be operated on are the target symbols and the strips labeled 'HOSTILE' and 'NEUTRAL'. In the tabular/pointer interface, the shared medium is the table, and the objects to be operated on are the table entries. In both direct engagement displays, the objects to be operated on and the strips share the same color code. Thus, for example, red in either the spatial display or the table of target attributes indicates that the subject should select the strip with the red wording.

In the indirect engagement interfaces (the command language interface and the graphical/keypad interface), the computer communicates to the user through one medium (i.e., a section of the tactical display) and set of objects, while the user communicates to the computer through another medium (a keypad and another section of the tactical display) using a different set of objects. Thus there is a separation of the user input and computer output.

RESULTS

We found considerable support for our hypothesis: automation deficit was least with the direct manipulation interface and greatest in the interfaces that lacked one component of direct manipulation. We assessed automation deficit by comparing subject performance on the first decision after the tactical assessment task was resumed to performance on the seventh decision. This effect was significant in the 12 subjects tested twice, F(3,8) = 11.9, p < .002. Similar results were found in the initial testing with the larger set of 20 subjects. As shown in Figure 3, with the direct manipulation interface, initial performance was as good as later performance. In other words, virtually no automation deficit was found with the direct manipulation interface. In contrast, automation deficit was clearly present in the two hybrid interfaces. Later performance was improved if either component of direct manipulation was present. This is shown by the reduction in response time for the later response in the two hybrid interfaces. If neither component of direct manipulation was present, as in the command language interface, both initial and later performance were poor. Further analysis has suggested that there may still be a deficit after a minute or so in handling events at a high rate with the command language interface when the tactical task is completely automated [1].
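The automation-deficit measure described above (first post-resumption response time minus the seventh) can be sketched as follows. The function name and the response-time values are illustrative assumptions, not the study's data or code:

```python
# Automation deficit: extra response time (ms) on the first decision after
# resuming manual control, relative to the seventh (settled-in) decision.
# All values below are hypothetical, chosen only to illustrate the measure.

def automation_deficit(response_times_ms):
    """response_times_ms: RTs (ms) for decisions 1..7 after resuming the task."""
    if len(response_times_ms) < 7:
        raise ValueError("need at least seven post-resumption decisions")
    return response_times_ms[0] - response_times_ms[6]

# A direct-manipulation-like pattern: first response about as fast as the seventh.
dm_rts = [2100, 2050, 2000, 1980, 1990, 1970, 2000]
# A command-language-like pattern: first response much slower than the seventh.
cl_rts = [4800, 4200, 3900, 3600, 3400, 3300, 3200]

print(automation_deficit(dm_rts))  # -> 100  (small deficit)
print(automation_deficit(cl_rts))  # -> 1600 (large deficit)
```

A near-zero deficit indicates the interface supported a smooth transition from automated to manual mode; a large positive deficit indicates the resumption cost the hypothesis predicts for low-directness interfaces.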

[Figure 3: bar chart of response time (ms), 0 to 5000, for the four interfaces: Direct Manipulation, Graphical/Keypad, Tabular/Pointer, Command Language.]

Figure 3. Interface effect on automation deficit in response time.

We also analyzed the effect of the type of decision and the type of display on automation deficit. We found that automation deficit was related significantly to the interaction between the type of decision and the type of display, F(1,16) = 7.89, p < .02. On classification decisions, automation deficit was greater with the tabular displays. On confirmation decisions, the deficit was greater with the graphical displays. The interaction is best illustrated by calculating the difference between the first response and the seventh response (see Figure 4). This pattern was also seen in the retesting four months later, although it was not as strong.
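The crossover character of this interaction can be sketched as follows. The cell values are hypothetical, chosen only to reproduce the reported pattern (classification hurt most by tabular displays, confirmation hurt most by graphical displays), not the study's measurements:

```python
# Decision-type x display-type interaction on automation deficit
# (first-response RT minus seventh-response RT, in ms).
# Cell means are illustrative assumptions, not the experiment's data.

deficits_ms = {
    ("classification", "tabular"):   900,  # deficit larger with tabular
    ("classification", "graphical"): 300,
    ("confirmation",   "tabular"):   200,
    ("confirmation",   "graphical"): 700,  # deficit larger with graphical
}

def crossover(deficits):
    """True if each decision type suffers its worst deficit on a different display."""
    worst_for_class = max(("tabular", "graphical"),
                          key=lambda d: deficits[("classification", d)])
    worst_for_conf = max(("tabular", "graphical"),
                         key=lambda d: deficits[("confirmation", d)])
    return worst_for_class != worst_for_conf

print(crossover(deficits_ms))  # -> True: the interaction crosses over
```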

Looking at all the responses, not just the first and a later comparison response, we found no significant differences either in response time or in accuracy between the four interfaces. The reason is that responses with the non-direct manipulation interfaces improved during periods when the event rate was lower (i.e., fourth through sixth targets). Thus the four interfaces supported comparable performance in "normal" operation. In addition, although response times were slower in the retesting four months later, the effect was not significant and was not related to interface style. Accuracy was related to the type of decision and the type of information that had to be interpreted. Accuracy for the confirmation decisions was 95% and for the classification decisions was 78%. Accuracy was lowest for classification decisions which depended upon monitoring whether a number was changing. This occurs when the subject monitors the bearing of a fighter.

[Figure 4: automation deficit (ms) by type of decision (Confirm, Classif.) for the two types of tactical displays.]

Figure 4. Automation deficit for different decisions with two types of tactical displays.

Intra-Task Effects of Interface

In a multiple task domain, the interface for one task might have effects on other tasks. An interesting intra-task effect of engagement (keypad versus touchscreen) was found when the performance on the tracking task was examined. Those using the keypad for the tactical assessment task had better tracking in the initial phase of resuming the tactical assessment task than those using the touchscreen. To understand this result, it is useful to consider touchscreen usage as a form of tracking; initial performance of this additional tracking task may interfere with making required adjustments to the other (joystick) tracking task. This result suggests that the touchscreen in the tactical assessment task induces an automation deficit in the tracking task. This occurs even though the subjects have been continually doing the tracking task.

Questionnaire Results

Twenty-four rating scales were used to obtain subjective judgments about feelings of control, feelings of awareness, preferences for the interface, judgments of the difficulty in learning and performing the tasks and specific aspects of the tasks, and ability to anticipate the changes in automation. Significant results were found on five scales. The most interesting results were the ratings of ability to anticipate changes in automation and awareness of the tactical situation at the end of automation. Ability to anticipate the changes in automation was dependent upon the type of tactical display. Those with the graphical display felt that they were able to anticipate the changes more often than those with the tabular interface. Furthermore, those with the graphical display felt that they were significantly more aware of the tactical situation at the end of automation. Debriefing confirmed that subjects with the graphical interface noticed the ebb and flow in activity during automation (i.e., activity picks up just before the task switches from automatic to manual), but those in the two tabular interfaces did not. This effect occurred despite the fact that tactical events were appearing in both types of display at the exact same time, that the number of items in both is always the same at any particular moment, and that the ebb and flow of activity is exactly the same in each type of display.

These results show that subjects using the graphical display were able to monitor events during automation. Their ability to anticipate the changes could have produced improved performance, at least for those who had the graphical display and the touchscreen. Several other questions were asked about activity during automation, and subjects accurately described some global characteristics of what had occurred during automation (i.e., how many targets had been present), but not details.

DISCUSSION

Our research has implications for the theory of direct manipulation as well as for the design of interfaces for dynamic, multitask systems. The theoretical implications are based upon both empirical results and observations we made during the course of developing the interfaces and conducting the experiment.

On the positive side, we found that the theoretical predictions that we made were generally supported. This result is noteworthy for several reasons. First, this research is a rare example of designing interfaces to test a theory explicitly. Previous studies of direct manipulation and command language interfaces have used interfaces for established applications, which may not fairly represent the theoretical concepts. Second, our predictions concern a specific aspect of performance (automation deficit) in a complex, multitask situation. Either challenge (specificity of prediction or complexity of context) would put demands on a theory. Both were present in this research, which makes the successful predictions of the theory especially impressive.

However, we also found that the theory has limitations. First, the theory does not address interfaces which include a mixture of interface styles and which are probably the rule more than the exception in complex applications. The reason is that complex applications involve different types of tasks. A single interface style may not support all tasks in an optimal manner. In the HHN theory, a general interface for the application is assumed. This requires choosing a representation that is suitable for most tasks. But it may not be optimal for certain tasks. Thus choosing a single interface style for a complex application may produce suboptimal performance on some aspects of the application.

This point is important because it is based not only on observation but on empirical results. In our data we found evidence that the optimal display for reducing automation deficit depends upon the type of decision. Simple decisions were served better by the tabular display, complex decisions by the graphical display. In terms of theoretical predictions, the shortcoming of the HHN theory is that it (and we) did not make predictions about the simple decisions. In retrospect, it is evident that the theory would have to be modified to address decision complexity. It is likely that the confirmation decisions were best supported by the tabular display because the user did not need complete information about the object but simply needed to know the value of a single parameter. If the model world metaphor is implemented faithfully, then different representations for different decisions are not directly possible. Thus an extension of the theory should be considered to support different levels of representation for different requirements.

Second, we found that the theory does not always help with detailed aspects of interface design. Our goal was to evaluate interfaces that had different levels of distance and engagement. The iterative design process we used forced many decisions about details of each of the four interfaces. Many of these decisions were based upon performance considerations and could not be based upon logical derivations from the tenets of the theory. Furthermore, the performance constraints were related to the specific application. For example, the relative placement of the two windows (horizontal or vertical) had an impact on how easy it was to use hands dedicated to the two tasks. This is a stimulus-response compatibility issue that the theory does not address. In essence, the theory is not performance based as are other formal models such as GOMS. It is most relevant in dealing with aspects of the interface that relate to its cognitive complexity.

Finally, we found that distance and engagement are difficult terms to define operationally and to evaluate. Our experiment required interfaces that combined different levels of distance and engagement. In other words, these were design requirements for the interfaces. One of the problems is how to distinguish between distance and engagement. Our empirical results suggest that they are not independent, in that the degree of automation deficit in the command line interface was not a combination of the deficits in the two hybrid interfaces, which each lacked an aspect of direct manipulation. HHN themselves point out that engagement is only present when both semantic and articulatory directness is present.

The interfaces that we produced represented combinations of different levels of distance and engagement. What is not clear is how much distance and engagement were actually present. It is apparent that any interface that allows the person to perform a task successfully has bridged the distance of the gulfs of execution and evaluation as HHN discuss them. The command language interface we produced supported the user's goal of performing the task and therefore reduced semantic distance to a greater degree than an interface which would not support this goal. And yet, it did not provide a view of the model world as a pilot would normally think of it, so considerable distance still remained. Better precision about the degree of distance and engagement in an interface would be helpful.

CONCLUSION

Based upon our findings, we expect that intermittent operation of complex tasks will be more effective with direct manipulation interfaces in a variety of dynamic, real-time systems. Although our results were found in a cockpit application, extension to other systems is appropriate, particularly systems in which the operator is intermittently moving from one task to another. To envision potential generalizations, it is helpful to characterize our application in abstract terms. The dual task application we tested included 1) a continuous task with simple perceptual demands, rigorous manual demands and minor cognitive complexity, and 2) an intermittent task with varying cognitive and perceptual complexity and minimal manual demands. The cognitive complexity of the intermittent task was manipulated by changing the interfaces and by changing the decisions. The results can be interpreted at an abstract level: increases in the cognitive complexity of an interface adversely affect the resumption of its use after a period of automation. This principle certainly holds for systems that include the two types of tasks. The principle would probably hold for systems which have greater complexity on the continuous task. In fact, the effects of interface would probably be greater. The key to appropriate generalization is that there was relatively little cognitive interaction between the two tasks. There was some manual interaction, as noted below.

Generalization may not be warranted if the system includes multiple tasks which use similar cognitive processes. In a multitask application, there may be different forms of expressions to the various tasks; the interaction of these expressions is an important issue. Direct engagement in particular may introduce incompatibilities. We found that tracking performance was adversely affected in the initial seconds of resuming pointing with the touchscreen. The cause was an incompatibility between the two forms of manual manipulation. The important issue is whether direct manipulation interfaces to different tasks could compete. According to Wickens [13], the answer is yes. In his resource theory, competition for attentional resources occurs whenever information to the user is in similar modalities or is in a similar code (e.g., spatial or verbal). Competition also occurs whenever responses are similar. Thus two direct manipulation interfaces which both have spatial graphical displays, and which both require pointing devices, could produce competition for attentional resources. Thus the generalization of our results to other multiple task systems should be made with consideration given to possible competition between aspects of the direct manipulation interface.

ACKNOWLEDGMENTS

We acknowledge Rob Jacob for contributing to the initial idea for the hypothesis and Rob Carter and Diane Dames for contributions to the design. Many individuals reviewed the interfaces and contributed to their development. We thank the subjects for their participation. This research was supported by the Office of Naval Technology.

REFERENCES

1. Ballas, J. A., Heitmeyer, C. L., and Pérez, M. A. Direct Manipulation and Intermittent Automation in Advanced Cockpits. Technical Rep. 9375. Naval Research Laboratory, Washington, D.C. (in press).
2. Beltracchi, L. A direct manipulation interface for heat engines based upon the Rankine cycle. IEEE Transactions on Systems, Man and Cybernetics. 17(3), pp. 478-487 (1987).
3. Benson, C. R., Govindaraj, T., Mitchell, C. M. and Krosner, S. P. Effectiveness of direct manipulation interaction in the supervisory control of FMS parts movement. Proc. IEEE International Conference on Systems, Man, and Cybernetics. Cambridge, MA, pp. 947-952 (Nov. 14-17, 1989).
4. Bernotat, R. K. Man and computer in future on-board guidance and control systems of aircraft. In B. Shackel (Ed.) Man-computer interaction: Human factors aspects of computers and people. Sijthoff & Noordhoff, Rockville, MD (1981).
5. Draper, S. W. Display managers as the basis for user-machine communication. In D. A. Norman & S. W. Draper (Eds.) User-centered system design. Erlbaum Associates, Hillsdale, NJ, pp. 339-352 (1986).
6. Hutchins, E., Hollan, J. D. and Norman, D. A. Direct manipulation interfaces. In D. A. Norman & S. W. Draper (Eds.) User-centered system design. Erlbaum Associates, Hillsdale, NJ, pp. 87-124 (1986).
7. Karat, J. Evaluating user interface complexity. Proceedings of the Human Factors Society 31st Annual Meeting. Human Factors Society, Santa Monica, CA, pp. 566-570 (1987).
8. Kieras, D. E. An overview of human-computer interaction. Journal of the Washington Academy of Sciences. 80(2), pp. 39-70 (1990).
9. Marshak, W. P., Kuperman, G., Ramsey, E. G., and Wilson, D. Situation awareness in map displays. Proceedings of the Human Factors Society 31st Annual Meeting. Human Factors Society, Santa Monica, CA, pp. 533-535 (1987).
10. Parasuraman, R., Bahri, T., Deaton, J. E., Morrison, J. G. and Barnes, M. Theory and design of adaptive automation in aviation systems. Cognitive Science Laboratory, The Catholic University of America, Washington, D.C. (1990).
11. Reising, J. M. and Hartsock, D. C. Advanced warning/caution/advisory displays for fighter aircraft. Proceedings of the Human Factors Society 33rd Annual Meeting. Human Factors Society, Santa Monica, CA, pp. 66-70 (1989).
12. Whiteside, J., Jones, S., Levy, P. S. and Wixon, D. User performance with command, menu, and iconic interfaces. Proc. ACM CHI'85 Human Factors in Computing Systems. ACM, New York, pp. 185-191 (1985).
13. Wickens, C. D. and Liu, Y. Codes and modalities in multiple resources: A success and a qualification. Human Factors. 30(5), pp. 599-616 (1988).
14. Wiener, E. L. Beyond the sterile cockpit. Human Factors. 27(1), pp. 75-90 (1985).
15. Ziegler, J. E. & Fahnrich, K. P. Direct manipulation. In M. Helander (Ed.) Handbook of human-computer interaction. Elsevier Science Publishers, North-Holland, pp. 123-133 (1988).
