
HUMAN FACTORS, 1988, 30(4), 415-430

Cognitive Engineering: Human Problem Solving with Tools

D. D. WOODS¹ and E. M. ROTH,² Westinghouse Research and Development Center, Pittsburgh, Pennsylvania

Cognitive engineering is an applied cognitive science that draws on the knowledge and techniques of cognitive psychology and related disciplines to provide the foundation for principle-driven design of person-machine systems. This paper examines the fundamental features that characterize cognitive engineering and reviews some of the major issues faced by this nascent interdisciplinary field.

INTRODUCTION

Why is there talk about a field of cognitive engineering? What is cognitive engineering? What can it contribute to the development of more effective person-machine systems? What should it contribute? We will explore some of the answers to these questions in this paper. As with any nascent and interdisciplinary field, there can be very different perspectives about what it is and how it will develop over time. This paper represents one such view.

The same phenomenon has produced both the opportunity and the need for cognitive engineering. With the rapid advances and dramatic reductions in the cost of computational power, computers have become ubiquitous in modern life. In addition to traditional office applications (e.g., word processing, accounting, information systems), computers increasingly dominate a broad range of work environments (e.g., industrial process control, air traffic control, hospital emergency rooms, robotic factories).

The need for cognitive engineering occurs because the introduction of computerization often radically changes the work environment and the cognitive demands placed on the worker. For example, increased automation in process control applications has resulted in a shift in the human role from a controller to a supervisor who monitors and manages semiautonomous resources. Although this change reduces people's physical workload, mental load often increases as the human role emphasizes monitoring and compensating for failures. Thus computerization creates an increasingly larger world of cognitive tasks to be performed. More and more we create or design cognitive environments.

¹ Requests for reprints should be sent to David D. Woods, Department of Industrial and Systems Engineering, Ohio State University, 1971 Neil Ave., Columbus, OH 43210.

² Emilie Roth, Department of Engineering and Public Policy, Carnegie-Mellon University, Pittsburgh, PA 15213.

The opportunity for cognitive engineering arises because computational technology also offers new kinds and degrees of machine power that greatly expand the potential to assist and augment human cognitive activities in complex problem-solving worlds, such as monitoring, problem formulation, plan generation and adaptation, and fault management. This is a highly creative time when people are exploring and testing what can be created with the new machine power: displays with multiple windows and even rooms (Henderson and Card, 1986). The new capabilities have led to large amounts of activity devoted to building new and more powerful tools: how to build better-performing machine problem solvers. The question we continue to face is how we should deploy the power available through new capabilities for tool building to assist human performance. This question defines the central focus of cognitive engineering: to understand what is effective support for human problem solvers.

© 1988, The Human Factors Society, Inc. All rights reserved.

The capability to build more powerful machines does not in itself guarantee effective performance, as witnessed by early attempts to develop computerized alarm systems in process control (e.g., Pope, 1978) and attempts to convert paper-based procedures to a computerized form (e.g., Elm and Woods, 1985). The conditions under which the machine will be exercised and the human's role in problem solving have a profound effect on the quality of performance. This means that factors related to tool usage can and should affect the very nature of the tools to be used. This observation is not new; in actual work contexts, performance breakdowns have been observed repeatedly with support systems constructed in a variety of media and technologies, including current AI tools, when issues of tool use were not considered (see Roth, Bennett, and Woods, 1987). This is the dark side: the capability to do more amplifies the potential magnitude of both our successes and our failures. Careful examination of past shifts in technology reveals that new difficulties (new types of errors or accidents) are created when the shift in machine power has changed the entire human-machine system in unforeseen ways (e.g., Hirschhorn, 1984; Hoogovens Report, 1976; Noble, 1984; Wiener, 1985).

Although our ability to build more powerful machine cognitive systems has grown and been promulgated rapidly, our ability to understand how to use these capabilities has not kept pace. Today we can describe cognitive tools in terms of the tool-building technologies (e.g., tiled or overlapping windows). The impediment to systematic provision of effective decision support is the lack of an adequate cognitive language of description (Clancey, 1985; Rasmussen, 1986). What are the cognitive implications of some application's task demands and of the aids and interfaces available to the practitioners in the system? How do people behave/perform in the cognitive situations defined by these demands and tools? Because this independent cognitive description has been missing, an uneasy mixture of other types of description of a complex situation has been substituted: descriptions in terms of the application itself (e.g., internal medicine or power plant thermodynamics), the implementation technology of the interfaces/aids, the user's physical activities.

Different kinds of media or technology may be more powerful than others in that they enable or enhance certain kinds of cognitive support functions. Different choices of media or technology may also represent trade-offs between the kinds of support functions that are provided to the practitioner. The effort required to provide a cognitive support function in different kinds of media or technology may also vary. In any case, performance aiding requires that one focus at the level of the cognitive support functions required, and then at the level of what technology can provide those functions or how those functions can be crafted within a given computational technology.

This view of cognitive technology as complementary to computational technology is in stark contrast to another view whereby cognitive engineering is a necessary but bothersome step to acquire the knowledge fuel necessary to run the computational engines of today and tomorrow.

COGNITIVE ENGINEERING IS . . .

There has been growing recognition of this need to develop an applied cognitive science that draws on knowledge and techniques of cognitive psychology and related disciplines to provide the basis for principle-driven design (Brown and Newman, 1985; Newell and Card, 1985; Norman, 1981; Norman and Draper, 1986; Rasmussen, 1986). In this section we will examine some of the characteristics of cognitive engineering (or whatever label you prefer: cognitive technologies, cognitive factors, cognitive ergonomics, knowledge engineering). The specific perspective for this exposition is that of cognitive systems engineering (Hollnagel and Woods, 1983; Woods, 1986).

about Complex Worlds

Cognitive engineering is about human behavior in complex worlds. Studying human behavior in complex worlds (and designing support systems) is one case of people engaged in problem solving in a complex world, analogous to the task of other human problem solvers (e.g., operators, troubleshooters) who confront complexity in the course of their daily tasks. Not surprisingly, the strategies researchers and designers use to cope with complexity are similar as well. For example, a standard tactic to manage complexity is to bound the world under consideration. Thus one might address only a single time slice of a dynamic process, or only a subset of the interconnections among parts of a highly coupled world. This strategy is limited because it is not clear whether the relevant aspects of the whole have been captured. First, parts of the problem-solving process may be missed or their importance underestimated; second, some aspects of problem solving may emerge only when more complex situations are directly examined.

For example, the role of problem formulation and reformulation in effective performance is often overlooked. Reducing the complexity of design or research questions by bounding the world to be considered merely displaces the complexity to the person in the operational world, rather than providing a strategy to cope with the true complexity of the actual problem-solving context. It is one major source of failure in the design of machine problem solvers. For example, the designer of a machine problem solver may assume that only one failure is possible in order to be able to completely enumerate possible solutions and to make use of classification problem-solving techniques (Clancey, 1985). However, the actual problem solver must cope with the possibility of multiple failures, misleading signals, and interacting disturbances (e.g., Pople, 1985; Woods and Roth, 1986).

The result is that we need, particularly in this time of advancing machine power, to understand human behavior in complex situations. What makes problem solving complex? How does complexity affect the performance of human and machine problem solvers? How can problem-solving performance in complex worlds be improved and deficiencies avoided? Understanding the factors that produce complexity, the cognitive demands that they create, and some of the cognitive failure forms that emerge when these demands are not met is essential if advances in machine power are to lead to new cognitive tools that actually enhance problem-solving performance. (See Dorner, 1983; Fischhoff, Lanir, and Johnson, 1986; Klein, in press; Montmollin and De Keyser, 1985; Rasmussen, 1986; Selfridge, Rissland, and Arbib, 1984, for other discussions on the nature of complexity in problem solving.)

Ecological

Cognitive engineering is ecological. It is about multidimensional, open worlds and not about the artificially bounded, closed worlds typical of the laboratory or the engineer's desktop (e.g., Coombs and Hartley, 1987; Funder, 1987). Of course, virtually all of the work environments that we might be interested in are man-made. The point is that these worlds encompass more than the design intent; they exist in the world as natural problem-solving habitats.

An example of the ecological perspective is the need to study humans solving problems with tools (i.e., support systems), as opposed to laboratory research that continues, for the most part, to examine human performance stripped of any tools. How to put effective cognitive tools into the hands of practitioners is the sine qua non for cognitive engineering. From this viewpoint, quite a lot could be learned from examining the nature of the tools that people spontaneously create to work more effectively in some problem-solving environment, or examining how preexisting mechanisms are adapted to serve as tools, as occurred in Roth et al. (1987), or examining how tools provided for a practitioner are really put to use by practitioners. The studies by De Keyser (e.g., 1986) are extremely unique with respect to the latter.

In reducing the target world to a tractable laboratory or desktop world in search of precise results, we run the risk of eliminating the critical features of the world that drive behavior. This creates the problem of deciding what counts as an effective stimulus (as Gibson has pointed out in ecological perception) or, to use an alternative terminology, deciding what counts for a symbol. To decide this question, Gibson (1979) and Dennett (1982), among others, have pointed out the need for a semantic and pragmatic analysis of environment-cognitive agent relationships with respect to the goals/resources of the agent and the demands/constraints in the environment. As a result, one has to pay very close attention to what people actually do in a problem-solving world, given the actual demands that they face (Woods, Roth, and Pople, 1987). Principle-driven design of support systems begins with understanding what are the difficult aspects of a problem-solving situation (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

about the Semantics of a Domain

A corollary to the foregoing points is that cognitive engineering must address the contents or semantics of a domain (e.g., Coombs, 1986; Coombs and Hartley, 1987). Purely syntactic and exclusively tool-driven approaches to develop support systems are vulnerable to the error of the third kind: solving the wrong problem. The danger is to fall into the psychologist's fallacy of William James (1890), whereby the psychologist's reality is confused with the psychological reality of the human practitioner in his or her problem-solving world. To guard against this danger, the psychologist or cognitive engineer must start with the working assumption that practitioner behavior is reasonable and attempt to understand how this behavior copes with the demands and constraints imposed by the problem-solving world in question. For example, the introduction of computerized alarm systems into power plant control rooms inadvertently so badly undermined the strategies operational personnel used to cope with some problem-solving demands that the systems had to be removed and the previous alarm system restored (Pope, 1978).

The question is not why people failed to accept a useful technology, but rather how the original alarm system supported, and the new system failed to support, operator strategies to cope with the world's problem-solving demands. This is not to say that the strategies developed to cope with the original task demands are always optimal, or even that they always produce acceptable levels of performance, but only that understanding how they function in the initial cognitive environment is a starting point to develop truly effective support systems (e.g., Roth and Woods, 1988).

Semantic approaches, on the other hand, are vulnerable to myopia. If each world is seen as completely unique and must be investigated tabula rasa, then cognitive engineering can be no more than a set of techniques that are used to investigate every world anew. If this were the case, it would impose strong practical constraints on principle-driven development of support systems, restricting it to cases in which the consequences of poor performance are extremely high.

To achieve relevance to specific worlds and generalizability across worlds, the cognitive language must be able to escape the language of particular worlds, as well as the language of particular computational mechanisms, and identify pragmatic reasoning situations, after Cheng and Holyoak (1985) and Cheng, Holyoak, Nisbett, and Oliver (1986). These reasoning situations are abstract relative to the language of the particular application in question, and therefore transportable across worlds; but they are also pragmatic in the sense that the reasoning involves knowledge of the things being reasoned about. More ambitious are attempts to build a formal cognitive language; for example, that by Coombs and Hartley (1987) through their work on coherence in model generative reasoning.

about Improved Performance

Cognitive engineering is not merely about the contents of a world; it is about changing behavior/performance in that world. This is both a practical consideration (improving performance or reducing errors justifies the investment from the point of view of the world in question) and a theoretical consideration (the ability to produce practical changes in performance is the criterion for demonstrating an understanding of the factors involved). Basic concepts are confirmed only when they generate treatments (aiding, either on-line or off-line) that make a difference in the target world. Cheng's concepts about human deductive reasoning (Cheng and Holyoak, 1985; Cheng et al., 1986) generated treatments that produced very large performance changes, both absolutely and relative to the history of rather ineffectual alternative treatments to human biases in deductive reasoning.

To achieve the goal of enhanced performance, cognitive engineering must first identify the sources of error that impair the performance of the current problem-solving system. This means that there is a need for cognitive engineering to understand where, how, and why machine, human, and human-plus-machine problem solving breaks down in natural problem-solving habitats.

Buggy knowledge (missing, incomplete, or erroneous knowledge) is one source of error (e.g., Brown and Burton, 1978; Brown and VanLehn, 1980; Gentner and Stevens, 1983). The buggy knowledge approach provides a specification of the knowledge structures (e.g., incomplete or erroneous knowledge) that are postulated to produce the pattern of errors and correct responses that characterize the performance of particular individuals. The specification is typically embodied as a computer program and constitutes a theory of what these individuals know (including misconceptions). Human performance aiding then focuses on providing missing knowledge and correcting the knowledge bugs. From the point of view of computational power, more knowledge and more correct knowledge can be embodied and delivered in the form of a rule-based expert system, following a knowledge acquisition phase that determines the fine-grained domain knowledge.

But the more critical question for effective human performance may be how knowledge is activated and utilized in the actual problem-solving environment (e.g., Bransford, Sherwood, Vye, and Rieser, 1986; Cheng et al., 1986). The question concerns not merely whether the problem solver knows some particular piece of domain knowledge, such as the relationship between two entities. Does he or she know that it is relevant to the problem at hand, and does he or she know how to utilize this knowledge in problem solving? Studies of education and training often show that students successfully acquire knowledge that is potentially relevant to solving domain problems, but that they often fail to exhibit skilled performance; for example, differences in solving mathematical exercises versus word problems (see Gentner and Stevens, 1983, for examples).

The fact that people possess relevant knowledge does not guarantee that that knowledge will be activated and utilized when needed in the actual problem-solving environment. This is the issue of expression of knowledge. Education and training tend to assume that if a person can be shown to possess a piece of knowledge in any circumstance, then this knowledge should be accessible under all conditions in which it might be useful. In contrast, a variety of research has revealed dissociation effects whereby knowledge accessed in one context remains inert in another (Bransford et al., 1986; Cheng et al., 1986; Perkins and Martin, 1986). For example, Gick and Holyoak (1980) found that, unless explicitly prompted, people will fail to apply a recently learned problem-solution strategy to an isomorphic problem (see also Kotovsky, Hayes, and Simon, 1985). Thus the fact that people possess relevant knowledge does not guarantee that this knowledge will be activated when needed. The critical question is not to show that the problem solver possesses domain knowledge, but rather the more stringent criterion that situation-relevant knowledge is accessible under the conditions in which the task is performed. This has been called the problem of inert knowledge: knowledge that is accessed only in a restricted set of contexts.

The general conclusion of studies on the problem of inert knowledge is that possession of the relevant domain knowledge or strategies by themselves is not sufficient to ensure that this knowledge will be accessed in new contexts. Off-line training experiences need to promote an understanding of how concepts and procedures can function as tools for solving relevant problems (Bransford et al., 1986; Brown, Bransford, Ferrara, and Campione, 1983; Brown and Campione, 1986). Training has to be about more than simply student knowledge acquisition; it must also enhance the expression of knowledge by conditionalizing knowledge to its use via information about triggering conditions and constraints (Glaser, 1984).

Similarly, on-line representations of the world can help or hinder problem solvers in recognizing what information or strategies are relevant to the problem at hand (Woods, 1986). For example, Fischhoff, Slovic, and Lichtenstein (1979) and Kruglanski, Friedland, and Farkash (1984) found that judgmental biases (e.g., representativeness) were greatly reduced or eliminated when aspects of the situation cued the relevance of statistical information and reasoning. Thus one dimension along which representations vary is their ability to provide prompts to the knowledge relevant in a given context.

The challenge for cognitive engineering is to study and develop ways to enhance the expression of knowledge and to avoid inert knowledge. What training content and experiences are necessary to develop conditionalized knowledge (Glaser, 1984; Lesh, 1987; Perkins and Martin, 1986)? What representations cue people about the knowledge that is relevant to the current context of goals, system state, and practitioner intentions (Wiecha and Henrion, 1987; Woods and Roth, 1988)?

about Systems

Cognitive engineering is about systems. One source of tremendous confusion has been an inability to clearly define the systems of interest. From one point of view, the computer program being executed is the end application of concern. In this case one often speaks of the interface, the tasks performed within the syntax of the interface, and human users of the interface. Notice that the application world (what the interface is used for) is deemphasized. The bulk of work on human-computer interaction takes this perspective. Issues of concern include designing for learnability (e.g., Brown and Newman, 1985; Carroll and Carrithers, 1984; Kieras and Polson, 1985) and designing for ease and pleasurableness of use (Malone, 1983; Norman, 1983; Shneiderman, 1986).

A second perspective is to distinguish the interface from the application world (Hollnagel, Mancini, and Woods, 1986; Mancini, Woods, and Hollnagel, in press; Miyata and Norman, 1986; Rasmussen, 1986; Stefik et al., 1985). For example, text-editing tasks are performed only in some larger context, such as transcription, data entry, and composition. The interface is an external representation of an application world; that is, a medium through which agents come to know and act on the world: troubleshooting electronic devices (Davis, 1983), logistic maintenance systems, managing data communication networks, managing power distribution networks, medical diagnosis (Cohen et al., 1987; Gadd and Pople, 1987), aircraft and helicopter flight decks (Pew et al., 1986), air traffic control systems, process control accident response (Woods, Roth, and Pople, 1987), and command and control of a battlefield (e.g., Fischhoff et al., 1986). Tasks are properties of the world in question, although performance of these fundamental tasks (i.e., demands) is affected by the design of the external representation (e.g., Mitchell and Saisi, 1987). The human is not a passive user of a computer program but is an active problem solver in some world. Therefore we will generally refer to people as domain agents or actors or problem solvers, and not as users.

In part, the difference in the foregoing views can be traced to differences in the cognitive complexity of the domain task being supported. Research on person-computer interaction has typically dealt with office applications (e.g., word processors for document preparation or copying machines for duplicating material) in which the goals to be accomplished (e.g., replace word 1 with word 2) and the steps required to accomplish them are relatively straightforward. These applications fall at one extreme of the cognitive complexity space. In contrast, there are many decision-making and supervisory environments (e.g., military situation assessment, medical diagnosis) in which problem formulation, situation assessment, goal definition, plan generation, and plan monitoring and adaptation are significantly more complex. It is in designing interfaces and aids for these applications that it is essential to distinguish the world to be acted on from the interface or window on the world (how one comes to know that world) and from agents who can act directly or indirectly on the world.

The system of interest in design should not be the machine problem solver per se, nor should the focus of interest in evaluation be the performance of the machine problem solver alone. Ultimately, the focus must be the design and the performance of the human-machine problem-solving ensemble: how to couple human intelligence and machine power in a single integrated system that maximizes overall performance.

about Multiple Cognitive Agents

A large number of the worlds that cognitive engineering should be able to address contain multiple agents who can act on the world in question (e.g., command and control, process control, data communication networks). Not only do we need to be clear about where systemic boundaries are drawn with respect to the application world and interfaces to or representations of the world, we also need to be clear about the different agents who can act directly or indirectly on the world. Cognitive engineering must be able to address systems with multiple cognitive agents. This applies to multiple human cognitive systems, often called distributed decision making (e.g., Fischhoff, 1986; Fischhoff et al., 1986; March and Weisinger-Baylon, 1986; Rochlin, La Porte, and Roberts, in press; Schum, 1980).

Because of the expansions in computational powers, the machine element can be thought of as a partially autonomous cognitive agent in its own right. This raises the problem of how to build a cognitive system that combines both human and machine cognitive systems, or, in other words, joint cognitive systems (Hollnagel, Mancini, and Woods, 1986; Mancini et al., in press). When a system includes these machine agents, the human role is not eliminated but shifted. This means that changes in automation are changes in the joint human-machine cognitive system, and the design goal is to maximize overall performance.

One metaphor that is often invoked to frame questions about the relationship between human and machine intelligence is to examine human-human relationships in multiperson problem-solving or advisory situations and then to transpose the results to human-intelligent machine interaction (e.g., Coombs and Alty, 1984). Following this metaphor leads Muir (1987) to raise the question of the role of trust between man and machine in effective performance. One provocative question that Muir's analysis generates is, how does the level of trust between human and machine problem solvers affect performance? The practitioner's judgment of machine competence or predictability can be miscalibrated, leading to excessive trust or mistrust. Either a system will be underutilized or ignored when it could provide effective assistance, or the practitioner will defer to the machine even in areas that challenge or exceed the machine's range of competence.

Another question concerns how trust is established between human and machine. Trust or mistrust is based on cumulative experience with the other agent that provides evidence about enduring characteristics of the agent, such as competence and predictability. This means that factors about how new technology is introduced into the work environment can play a critical role in building or undermining trust in the machine problem solver. If this stage of technology introduction is mishandled (for example, practitioners are exposed to the system before it is adequately debugged), the practitioner's trust in the machine's competence can be undermined. Muir's analysis shows how variations in explanation and display facilities affect how the person will use the machine, by affecting his or her ability to see how the machine works and, therefore, his or her level of calibration. Muir also points out how human information processing biases can affect how the evidence of experience is interpreted in the calibration process.

A second metaphor that is frequently invoked is supervisory control (Rasmussen, 1986; Sheridan and Hennessy, 1984). Again, the machine element is thought of as a semiautonomous cognitive system, but in this case it is a lower-order subordinate, albeit partially autonomous. The human supervisor generally has a wider range of responsibility, and he or she possesses ultimate responsibility and authority. Boy (1987) uses this metaphor to guide the development of assistant systems built from AI technology.

In order for a supervisory control architecture between human and machine agents to function effectively, several requirements must be met that, as Woods (1986) has pointed out, are often overlooked when tool-driven constraints dominate design. First, the supervisor must have real as well as titular authority; machine problem solvers can be designed and introduced in such a way that the human retains the responsibility for outcomes without any effective authority. Second, the supervisor must be able to redirect the lower-order machine cognitive system. Roth et al. (1987) found that some practitioners tried to devise ways to instruct an expert system in situations in which the machine's problem solving had broken down, even when the machine's designer had provided no such mechanisms. Third, in order to be able to supervise another agent, there is need for a common or shared representation of the state of the world and of the state of the problem-solving process (Woods and Roth, 1988b); otherwise, communication between the agents will break down (e.g., Suchman, 1987).

Significant attention has been devoted to the issue of how to get intelligent machines to assess the goals and intentions of humans without requiring explicit statements (e.g., Allen and Perrault, 1980; Quinn and Russell, 1986). However, the supervisory control metaphor highlights that it is at least as important to pay attention to what information or knowledge people need to track the intelligent machine's state of mind (Woods and Roth, 1988a).

A third metaphor is to consider the new machine capabilities as extensions and expansions along a dimension of machine power. In this metaphor machines are tools; people are tool builders and tool users. Technological development has moved from physical tools (tools that magnify capacity for physical work) to perceptual tools (extensions to the human perceptual apparatus, such as medical imaging) and now, with the arrival of AI technology, to cognitive tools. (Although this type of tool has a much longer history, e.g., aide-memoires or decision analysis, AI has certainly increased the interest in and ability to provide cognitive tools.)

In this metaphor the question of the relationship between machine and human takes the form of, what kind of tool is an intelligent machine (e.g., Ehn and Kyng, 1984; Suchman, 1987; Woods, 1986)? At one extreme the machine can be a prosthesis that compensates for a deficiency in human reasoning or problem solving. This could be a local deficiency for the population of expected human practitioners or a global weakness in human reasoning. At the other extreme the machine can be an instrument in the hands of a fundamentally competent but limited-resource human practitioner (Woods, 1986). The machine aids the practitioner by providing increased or new kinds of resources (either knowledge resources or processing resources, such as an expanded field of attention).

The extra resources may support improved performance in several ways. One path is to off-load overhead information-processing activities from the person to the machine to allow the human practitioner to focus his or her resources on higher-level issues and strategies. Examples include keeping track of multiple ongoing activities in an external memory, performing basic data computations or transformations, and collecting the evidence related to decisions about particular domain issues, as occurred recently with new computer-based displays in nuclear power plant control rooms. Extra resources may help to improve performance in another way by allowing a restructuring of how the human performs the task, shifting performance onto a new, higher plateau (see Pea, 1985). This restructuring concept is in contrast to the usual notion of new systems as amplifiers of user capabilities. As Pea (1985) points out, the amplification metaphor implies that support systems improve human performance by increasing the strength or power of the cognitive processes the human problem solver goes through to solve the problem, but without any change in the underlying activities, processes, or strategies that determine how the problem is solved. Alternatively, the resources provided (or not provided) by new performance aids and interface systems can support restructuring of the activities, processes, or strategies that carry out the cognitive functions relevant to performing domain tasks (e.g., Woods and Roth, 1988). New levels of performance are now possible, and the kinds of errors one is prone to (and therefore the consequences of errors) change as well.


The instrumental perspective suggests that the most effective power provided by good cognitive tools is conceptualization power (Woods and Roth, 1988a). The importance of conceptualization power in effective problem-solving performance is often overlooked because the part of the problem-solving process that it most crucially affects, problem formulation and reformulation, is often left out of studies of problem solving and the design basis of new support systems. Support systems that increase conceptualization power (1) enhance a problem solver's ability to experiment with possible worlds or strategies (e.g., Hollan, Hutchins, and Weitzman, 1984; Pea, 1985; Woods et al., 1987); (2) enhance the ability to visualize or to make concrete the abstract and uninspectable (analogous to perceptual tools), in order to better see the implications of concepts and to help one restructure one's view of the problem (Becker and Cleveland, 1984; Coombs and Hartley, 1987; Hutchins, Hollan, and Norman, 1985; Pople, 1985); and (3) enhance error detection by providing better feedback about the effects/results of actions (Rizzo, Bagnara, and Visciola, 1987).

Problem-Driven

Cognitive engineering is problem-driven, tool-constrained. This means that cognitive engineering must be able to analyze a problem-solving context and understand the sources of both good and poor performance, that is, the cognitive problems to be solved or challenges to be met (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

To build a cognitive description of a problem-solving world, one must understand how representations of the world interact with different cognitive demands imposed by the application world in question and with characteristics of the cognitive agents, both for existing and prospective changes in the world. Building a cognitive description is part of a problem-driven approach to the application of computational power.

The results from this analysis are used to define the kind of solutions that are needed to enhance successful performance: to meet cognitive demands of the world, to help the human function more expertly, to eliminate or mitigate error-prone points in the total cognitive system (demand-resource mismatches). The results of this process then can be deployed in many possible ways, as constrained by tool-building limitations and tool-building possibilities: exploration training worlds, new information-representation aids, advisory systems, or machine problem solvers (see Roth and Woods, 1988; Woods and Roth, 1988).

In tool-driven approaches, knowledge acquisition focuses on describing domain knowledge in terms of the syntax of computational mechanisms; that is, the language of implementation is used as a cognitive language. Semantic questions are displaced either to whomever selects the computational mechanisms or to the domain expert who enters knowledge.

The alternative is to provide an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean about problem solving in the domain (Woods, 1988). Acquiring and using a domain semantics is essential to avoiding potential errors and specifying performance boundaries when building intelligent machines (Roth et al., 1987). Techniques for analyzing cognitive demands not only help characterize a particular world but also help to build a repertoire of general cognitive situations that are transportable. There is a clear trend toward this conception of knowledge acquisition in order to achieve more effective decision support and fewer brittle machine problem solvers (e.g., Clancey, 1985; Coombs, 1986; Gruber and Cohen, 1987).

AN EXAMPLE OF HOW COGNITIVE AND COMPUTATIONAL TECHNOLOGIES INTERACT

To illustrate the role of cognitive engineering in the deployment of new computational powers, consider a case in human-computer interaction (for other cases see Mitchell and Forren, 1987; Mitchell and Saisi, 1987; Roth and Woods, 1988; Woods and Roth, 1988). It is one example of how purely technology-driven deployment of new automation capabilities can produce unintended and unforeseen negative consequences. In this case an attempt was made to implement a computerized procedure system using a commercial hypertext system for building and navigating large network data bases. Because cognitive engineering issues were not considered in the application of the new technology, a high-level person-machine performance problem resulted: the getting-lost phenomenon (Woods, 1984). Based on a cognitive analysis of the world's demands, it was possible to redesign the system to support domain actors and eliminate the getting-lost problems (Elm and Woods, 1985). Through cognitive engineering it proved possible to build a more effective computerized procedure system that, for the most part, was within the technological boundaries set by the original technology chosen for the application.

The data base application in question was designed to computerize paper-based instructions for nuclear power plant emergency operation. The system was built on a network data base shell with a built-in interface for navigating the network (Robertson, McCracken, and Newell, 1981). The shell already treated human-computer interface issues, so it was assumed possible to create the computerized system simply by entering domain knowledge (i.e., the current instructions as implemented for the paper medium) into the interface and network data base framework provided by the shell.

The system contained two kinds of frames: menu frames, which served to point to other frames, and content frames, which contained instructions from the paper procedures (generally one procedure step per frame). In preliminary tests of the system it was found that people uniformly failed to complete recovery tasks with procedures computerized in this way. They became disoriented or lost: unable to keep procedure steps in pace with plant behavior, unable to determine where they were in the network of frames, unable to decide where to go next, or unable even to find places where they knew they should be (i.e., they diagnosed the situation and knew the appropriate responses as trained operators, yet could not find the relevant procedural steps in the network). These results were found with people experienced with the paper-based procedures and plant operations, as well as with people knowledgeable in the frame-network software package and how the procedures were implemented within it.

What was the source of the disorientation problem? It resulted from a failure to analyze the cognitive demands associated with using procedures in an externally paced world. For example, in using the procedures the operator often is required to interrupt one activity and transition to another step in the procedure, or to a different procedure, depending on plant conditions and plant responses to operator actions. As a result, operators need to be able to rapidly transition across procedure boundaries and to return to incomplete steps. Because of the size of a frame, there was a very high proportion of menu frames relative to content frames, and the content frames provided a narrow window on the world. This structure made it difficult to read ahead to anticipate instructions, to mark steps pending completion and return to them easily, to see the organization of the steps, or to mark a trail of activities carried out during the recovery. Many activities that are inherently easy to perform in a physical book turned out to be very difficult to carry out, for example, reading ahead. The result was a mismatch between user information-processing activities during domain tasks and the structure of the interface as a representation of the world of recovery from abnormalities.

These results triggered a full design cycle that began with a cognitive analysis to determine the user information-handling activities needed to effectively accomplish recovery tasks in emergency situations. Following procedures was not simply a matter of linear, step-by-step execution of instructions; rather, it required the ability to maintain a broad context of the purpose and relationships among the elements in the procedure (see also Brown et al., 1982; Roth et al., 1987). Operators needed to maintain awareness of the global context (i.e., how a given step fits into the overall plan), to anticipate the need for actions by looking ahead, and to monitor for changes in plant state that would require adaptation of the current response plan.

A variety of cognitive engineering techniques were utilized in a new interface design to support these demands (see Woods, 1984). First, a spatial metaphor was used to make the system more like a physical book. Second, display selection/movement options were presented in parallel, rather than sequentially, with procedural information (defining two types of windows in the interface). Transition options were presented at several grains of analysis to support moves from step to step as easily as moves across larger units in the structure of the procedure system. In addition, incomplete steps were automatically tracked, and those steps were made directly accessible (e.g., electronic bookmarks or placeholders).
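The original report does not give implementation details, but the automatic bookmarking of incomplete steps described above can be sketched roughly as follows. The class names and procedure-step identifiers here are hypothetical illustrations, not the actual Elm and Woods (1985) design:

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One procedure step (identifiers are made up for illustration)."""
    step_id: str
    text: str
    complete: bool = False

class ProcedureNavigator:
    """Automatically bookmarks steps that were visited but left incomplete,
    so the operator can jump straight back to them without retracing menus."""

    def __init__(self, steps):
        self.steps = {s.step_id: s for s in steps}
        self.pending = []  # electronic bookmarks / placeholders

    def visit(self, step_id):
        step = self.steps[step_id]
        if not step.complete and step_id not in self.pending:
            self.pending.append(step_id)  # tracked with no operator effort
        return step

    def mark_complete(self, step_id):
        self.steps[step_id].complete = True
        if step_id in self.pending:
            self.pending.remove(step_id)

    def bookmarks(self):
        """Incomplete steps, directly accessible for return."""
        return list(self.pending)

nav = ProcedureNavigator([Step("E-1.3", "Verify SI flow"),
                          Step("E-1.4", "Check RCS pressure")])
nav.visit("E-1.3")         # interrupted before completion
nav.visit("E-1.4")
nav.mark_complete("E-1.4")
print(nav.bookmarks())     # -> ['E-1.3']
```

The point of the sketch is that the bookkeeping burden the book-based procedures imposed on the operator is carried by the interface instead.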

To provide the global context within which the current procedure step occurs, the step of interest is presented in detail and is embedded in a skeletal structure of the larger response plan of which it is a part (Furnas, 1986; Woods, 1984). Context sensitivity was supported by displaying the rules for possible adaptation or shifts in the current response plan that are relevant to the current context, in parallel with current relevant options and the current region of interest in the procedures (a third window). Note how the cognitive analysis of the domain defined what types of data needed to be seen effectively in parallel, which then determined the number of windows required. Also note that the cognitive engineering redesign was, with a few exceptions, directly implementable within the base capabilities of the interface shell.
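The skeletal-context idea follows Furnas's (1986) generalized fisheye view: each element of the plan gets a degree of interest equal to its a priori importance minus its distance from the current focus, and only elements above a threshold are displayed. A minimal sketch, in which the procedure titles and the simple index-distance metric are illustrative assumptions rather than the actual redesign:

```python
def fisheye_view(outline, focus, threshold=-4):
    """Show a focus step in detail inside a skeleton of the larger plan.

    outline: list of (depth, title) pairs in document order.
    Degree of interest (after Furnas, 1986) = a priori importance
    (-depth, so top-level plan elements matter most) minus distance
    from the focus; only elements above the threshold are kept.
    """
    shown = []
    for i, (depth, title) in enumerate(outline):
        doi = -depth - abs(i - focus)
        if i == focus or doi >= threshold:
            marker = ">>" if i == focus else "  "
            shown.append(f"{marker} {'  ' * depth}{title}")
    return shown

outline = [
    (0, "E-0 Reactor Trip"),
    (1, "Step 1"), (1, "Step 2"),
    (2, "Step 2a"), (2, "Step 2b"),
    (1, "Step 3"),
    (0, "E-1 Loss of Coolant"),
    (1, "Step 1"),
]
for line in fisheye_view(outline, focus=3):
    print(line)
```

With the focus on "Step 2a", nearby siblings and the top-level plan headings survive while distant low-level detail is suppressed, which is exactly the "detail embedded in a skeleton" effect the redesign needed.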

As the foregoing example saliently demonstrates, there can be severe penalties for failing to adequately map the cognitive demands of the environment. However, if we understand the cognitive requirements imposed by the domain, then a variety of techniques can be employed to build support systems for those functions.

SUMMARY

The problem of providing effective decision support hinges on how the designer decides what will be useful in a particular application. Can researchers provide designers with concepts and techniques to determine what will be useful support systems, or are we condemned to simply build what can be built practically and wait for the judgment of experience? Is principle-driven design possible?

A vigorous and viable cognitive engineering can provide the knowledge and techniques necessary for principle-driven design. Cognitive engineering does this by providing the basis for a problem-driven, rather than a technology-driven, approach whereby the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver. Cognitive engineering can address (a) existing cognitive systems, in order to identify deficiencies that cognitive system redesign can correct, and (b) prospective cognitive systems, as a design tool during the allocation of cognitive tasks and the development of an effective joint architecture. In this paper we have attempted to outline the questions that need to be answered to make this promise real and to point to research that already has begun to provide the necessary concepts and techniques.

REFERENCES

Allen, J., and Perrault, C. (1980). Analyzing intention in utterances. Artificial Intelligence, 15, 143-178.
Becker, R. A., and Cleveland, W. S. (1984). Brushing the scatterplot matrix: High-interaction graphical methods for analyzing multidimensional data (Tech. Report). Murray Hill, NJ: AT&T Bell Laboratories.
Boy, G. A. (1987). Operator assistant systems. International Journal of Man-Machine Studies, 27, 541-554. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.
Bransford, J., Sherwood, R., Vye, N., and Rieser, J. (1986). Teaching and problem solving: Research foundations. American Psychologist, 41, 1078-1089.
Brown, A. L., Bransford, J. D., Ferrara, R. A., and Campione, J. C. (1983). Learning, remembering, and understanding. In J. H. Flavell and E. M. Markman (Eds.), Carmichael's manual of child psychology. New York: Wiley.
Brown, A. L., and Campione, J. C. (1986). Psychological theory and the study of learning disabilities. American Psychologist, 41, 1059-1068.
Brown, J. S., and Burton, R. R. (1978). Diagnostic models for procedural bugs in basic mathematics. Cognitive Science, 2, 155-192.
Brown, J. S., Moran, T. P., and Williams, M. D. (1982). The semantics of procedures (Tech. Report). Palo Alto, CA: Xerox Palo Alto Research Center.
Brown, J. S., and Newman, S. E. (1985). Issues in cognitive and social ergonomics: From our house to Bauhaus. Human-Computer Interaction, 1, 359-391.
Brown, J. S., and VanLehn, K. (1980). Repair theory: A generative theory of bugs in procedural skills. Cognitive Science, 4, 379-426.
Carroll, J. M., and Carrithers, C. (1984). Training wheels in a user interface. Communications of the ACM, 27, 800-806.
Cheng, P. W., and Holyoak, K. J. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.
Cheng, P., Holyoak, K., Nisbett, R., and Oliver, L. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 18, 293-328.
Clancey, W. J. (1985). Heuristic classification. Artificial Intelligence, 27, 289-350.
Cohen, P., Day, D., Delisio, J., Greenberg, M., Kjeldsen, R., and Suthers, D. (1987). Management of uncertainty in medicine. In Proceedings of the IEEE Conference on Computers and Communications. New York: IEEE.
Coombs, M. J. (1986). Artificial intelligence and cognitive technology: Foundations and perspectives. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.
Coombs, M. J., and Alty, J. L. (1984). Expert systems: An alternative paradigm. International Journal of Man-Machine Studies, 20, 21-43.
Coombs, M. J., and Hartley, R. T. (1987). The MGR algorithm and its application to the generation of explanations for novel events. International Journal of Man-Machine Studies, 27, 679-708. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.
Davis, R. (1983). Reasoning from first principles in electronic troubleshooting. International Journal of Man-Machine Studies, 19, 403-423.
De Keyser, V. (1986). Les interactions hommes-machine: Caractéristiques et utilisations des différents supports d'information par les opérateurs (Person-machine interaction: How operators use different information channels). Rapport Politique Scientifique/FAST no. 8. Liège, Belgium: Psychologie du Travail, Université de l'État à Liège.
Dennett, D. (1982). Beyond belief. In A. Woodfield (Ed.), Thought and object. Oxford: Clarendon Press.
Dorner, D. (1983). Heuristics and cognition in complex systems. In R. Groner, M. Groner, and W. F. Bischof (Eds.), Methods of heuristics. Hillsdale, NJ: Erlbaum.
Ehn, P., and Kyng, M. (1984). A tool perspective on design of interactive computer support for skilled workers. Unpublished manuscript, Swedish Center for Working Life, Stockholm.
Elm, W. C., and Woods, D. D. (1985). Getting lost: A case study in interface design. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 927-931). Santa Monica, CA: Human Factors Society.
Fischhoff, B. (1986). Decision making in complex systems. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support. New York: Springer-Verlag.
Fischhoff, B., Slovic, P., and Lichtenstein, S. (1979). Improving intuitive judgment by subjective sensitivity analysis. Organizational Behavior and Human Performance, 23, 339-359.
Fischhoff, B., Lanir, Z., and Johnson, S. (1986). Military risk taking and modern C3I (Tech. Report 86-2). Eugene, OR: Decision Research.
Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgments. Psychological Bulletin, 101, 75-90.
Furnas, G. W. (1986). Generalized fisheye views. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 16-23). New York: ACM/SIGCHI.
Gadd, C. S., and Pople, H. E. (1987). An interpretation synthesis model of medical teaching rounds discourse: Implications for expert system interaction. International Journal of Educational Research, 1.
Gentner, D., and Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Erlbaum.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Gick, M. L., and Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-365.
Glaser, R. (1984). Education and thinking: The role of knowledge. American Psychologist, 39, 93-104.
Gruber, T., and Cohen, P. (1987). Design for acquisition: Principles of knowledge system design to facilitate knowledge acquisition (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 143-159.
Henderson, A., and Card, S. (1986). Rooms: The use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface (Tech. Report). Palo Alto, CA: Xerox PARC.
Hirschhorn, L. (1984). Beyond mechanization: Work and technology in a postindustrial age. Cambridge, MA: MIT Press.
Hollan, J., Hutchins, E., and Weitzman, L. (1984). Steamer: An interactive, inspectable, simulation-based training system. AI Magazine, 4, 15-27.
Hollnagel, E., Mancini, G., and Woods, D. D. (Eds.). (1986). Intelligent decision support in process environments. New York: Springer-Verlag.
Hollnagel, E., and Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.
Hoogovens Report. (1976). Human factors evaluation: Hoogovens No. 2 hot strip mill (Tech. Report FR251). London: British Steel Corporation/Hoogovens.
Hutchins, E., Hollan, J., and Norman, D. A. (1985). Direct manipulation interfaces. Human-Computer Interaction, 1, 311-338.
James, W. (1890). The principles of psychology. New York: Holt.
Kieras, D. E., and Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365-394.
Klein, G. A. (in press). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine research (vol. 5). Greenwich, CT: JAI Press.
Kotovsky, K., Hayes, J. R., and Simon, H. A. (1985). Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 17, 248-294.
Kruglanski, A., Friedland, N., and Farkash, E. (1984). Lay persons' sensitivity to statistical information: The case of high perceived applicability. Journal of Personality and Social Psychology, 46, 503-518.
Lesh, R. (1987). The evolution of problem representations in the presence of powerful conceptual amplifiers. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics. Hillsdale, NJ: Erlbaum.
Malone, T. W. (1983). How do people organize their desks? Implications for designing office automation systems. ACM Transactions on Office Information Systems, 1, 99-112.
Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press. (Special issue of International Journal of Man-Machine Studies, vol. 27)
March, J. G., and Weisinger-Baylon, R. (Eds.). (1986). Ambiguity and command. Marshfield, MA: Pitman Publishing.
McKendree, J., and Carroll, J. M. (1986). Advising roles of a computer consultant. In M. Mantei and P. Oberton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 35-40). New York: ACM/SIGCHI.
Miller, P. L. (1983). ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 449-461.
Mitchell, C., and Forren, M. G. (1987). Multimodal user input to supervisory control systems: Voice-augmented keyboard. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 594-607.
Mitchell, C., and Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.
Miyata, Y., and Norman, D. A. (1986). Psychological issues in support of multiple activities. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.
Montmollin, M. de, and De Keyser, V. (1985). Expert logic vs. operator logic. In G. Johannsen, G. Mancini, and L. Martensson (Eds.), Analysis, design and evaluation of man-machine systems. CEC-JRC Ispra, Italy: IFAC.
Muir, B. (1987). Trust between humans and machines. International Journal of Man-Machine Studies, 27, 527-539. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.
Newell, A., and Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209-242.
Noble, D. F. (1984). Forces of production: A social history of industrial automation. New York: Alfred A. Knopf.
Norman, D. A. (1981). Steps towards a cognitive engineering (Tech. Report). San Diego: University of California, San Diego, Program in Cognitive Science.
Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the ACM, 26, 254-258.
Norman, D. A., and Draper, S. W. (1986). User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.
Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.
Perkins, D., and Martin, F. (1986). Fragile knowledge and neglected strategies in novice programmers. In E. Soloway and S. Iyengar (Eds.), Empirical studies of programmers. Norwood, NJ: Ablex.
Pew, R. W., et al. (1986). Cockpit automation technology (Tech. Report 6133). Cambridge, MA: BBN Laboratories, Inc.
Pope, R. H. (1978). Power station control room and desk design: Alarm system and experience in the use of CRT displays. In Proceedings of the International Symposium on Nuclear Power Plant Control and Instrumentation. Cannes, France.
Pople, H., Jr. (1985). Evolution of an expert system: From Internist to Caduceus. In I. De Lotto and M. Stefanelli (Eds.), Artificial intelligence in medicine. New York: Elsevier Science Publishers.
Quinn, L., and Russell, D. M. (1986). Intelligent interfaces: User models and planners. In M. Mantei and P. Oberton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 314-320). New York: ACM/SIGCHI.
Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.
Rizzo, A., Bagnara, S., and Visciola, M. (1987). Human error detection processes. International Journal of Man-Machine Studies, 27, 555-570. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.
Robertson, G., McCracken, D., and Newell, A. (1981). The ZOG approach to man-machine communication. International Journal of Man-Machine Studies, 14, 461-488.
Rochlin, G. I., La Porte, T. R., and Roberts, K. H. (in press). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review.
Roth, E. M., Bennett, K., and Woods, D. D. (1987). Human interaction with an intelligent machine. International Journal of Man-Machine Studies, 27, 479-525. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.
Roth, E. M., and Woods, D. D. (1988). Aiding human performance: I. Cognitive analysis. Le Travail Humain, 51(1), 39-64.
Schum, D. A. (1980). Current developments in research on cascaded inference. In T. S. Wallstein (Ed.), Cognitive processes in decision and choice behavior. Hillsdale, NJ: Erlbaum.
Selfridge, O. G., Rissland, E. L., and Arbib, M. A. (1984). Adaptive control of ill-defined systems. New York: Plenum Press.
Sheridan, T., and Hennessy, R. (Eds.). (1984). Research and modeling of supervisory control behavior. Washington, DC: National Academy Press.
Shneiderman, B. (1986). Seven plus or minus two central issues in human-computer interaction. In M. Mantei and P. Obreton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 343-349). New York: ACM/SIGCHI.
Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S., and Suchman, L. (1985, September). Beyond the chalkboard: Using computers to support collaboration and problem solving in meetings (Tech. Report). Palo Alto, CA: Intelligent Systems Laboratory, Xerox Palo Alto Research Center.
Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.
Wiecha, C., and Henrion, M. (1987). A graphical tool for structuring and understanding quantitative decision models. In Proceedings of Workshop on Visual Languages. New York: IEEE Computer Society.
Wiener, E. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.
Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229-244.
Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.
Woods, D. D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L. P. Goodstein, H. B. Andersen, and S. E. Olsen (Eds.), Mental models, tasks and errors: A collection of essays to celebrate Jens Rasmussen's 60th birthday. London: Taylor and Francis.
Woods, D. D., and Hollnagel, E. (1987). Mapping cognitive demands in complex problem-solving worlds (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 257-275.
Woods, D. D., and Roth, E. M. (1986). Models of cognitive behavior in nuclear power plant personnel (NUREG-CR-4532). Washington, DC: U.S. Nuclear Regulatory Commission.
Woods, D. D., and Roth, E. M. (1988a). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interaction. New York: North-Holland.
Woods, D. D., and Roth, E. M. (1988b). Aiding human performance: II. From cognitive analysis to support systems. Le Travail Humain, 51, 139-172.
Woods, D. D., Roth, E. M., and Pople, H. (1987). Cognitive Environment Simulation: An artificial intelligence system for human performance assessment (NUREG-CR-4862). Washington, DC: U.S. Nuclear Regulatory Commission.


generation and adaptation, and fault management. This is a highly creative time, when people are exploring and testing what can be created with the new machine power: displays with multiple windows and even rooms (Henderson and Card, 1986). The new capabilities have led to large amounts of activity devoted to building new and more powerful tools, that is, how to build better-performing machine problem solvers. The question we continue to face is how we should deploy the power available through new capabilities for tool building to assist human performance. This question defines the central focus of cognitive engineering: to understand what is effective support for human problem solvers.

The capability to build more powerful machines does not in itself guarantee effective performance, as witnessed by early attempts to develop computerized alarm systems in process control (e.g., Pope, 1978) and attempts to convert paper-based procedures to a computerized form (e.g., Elm and Woods, 1985). The conditions under which the machine will be exercised and the human's role in problem solving have a profound effect on the quality of performance. This means that factors related to tool usage can and should affect the very nature of the tools to be used. This observation is not new; in actual work contexts, performance breakdowns have been observed repeatedly with support systems constructed in a variety of media and technologies, including current AI tools, when issues of tool use were not considered (see Roth, Bennett, and Woods, 1987). This is the dark side: the capability to do more amplifies the potential magnitude of both our successes and our failures. Careful examination of past shifts in technology reveals that new difficulties (new types of errors or accidents) are created when the shift in machine power has changed the entire human-machine system in unforeseen ways (e.g., Hirschhorn, 1984; Hoogovens Report, 1976; Noble, 1984; Wiener, 1985).

Although our ability to build more powerful machine cognitive systems has grown and been promulgated rapidly, our ability to understand how to use these capabilities has not kept pace. Today we can describe cognitive tools in terms of the tool-building technologies (e.g., tiled or overlapping windows). The impediment to systematic provision of effective decision support is the lack of an adequate cognitive language of description (Clancey, 1985; Rasmussen, 1986). What are the cognitive implications of some application's task demands and of the aids and interfaces available to the practitioners in the system? How do people behave/perform in the cognitive situations defined by these demands and tools? Because this independent cognitive description has been missing, an uneasy mixture of other types of description of a complex situation has been substituted: descriptions in terms of the application itself (e.g., internal medicine or power plant thermodynamics), the implementation technology of the interfaces/aids, or the user's physical activities.

Different kinds of media or technology may be more powerful than others in that they enable or enhance certain kinds of cognitive support functions. Different choices of media or technology may also represent trade-offs between the kinds of support functions that are provided to the practitioner. The effort required to provide a cognitive support function in different kinds of media or technology may also vary. In any case, performance aiding requires that one focus first at the level of the cognitive support functions required and then at the level of what technology can provide those functions, or how those functions can be crafted within a given computational technology.

This view of cognitive technology as complementary to computational technology is in stark contrast to another view whereby cognitive engineering is a necessary but bothersome step to acquire the knowledge fuel necessary to run the computational engines of today and tomorrow.

COGNITIVE ENGINEERING IS . . .

There has been growing recognition of this need to develop an applied cognitive science that draws on knowledge and techniques of cognitive psychology and related disciplines to provide the basis for principle-driven design (Brown and Newman, 1985; Newell and Card, 1985; Norman, 1981; Norman and Draper, 1986; Rasmussen, 1986). In this section we will examine some of the characteristics of cognitive engineering (or whatever label you prefer: cognitive technologies, cognitive factors, cognitive ergonomics, knowledge engineering). The specific perspective for this exposition is that of cognitive systems engineering (Hollnagel and Woods, 1983; Woods, 1986).

About Complex Worlds

Cognitive engineering is about human behavior in complex worlds. Studying human behavior in complex worlds (and designing support systems) is one case of people engaged in problem solving in a complex world, analogous to the task of other human problem solvers (e.g., operators, troubleshooters) who confront complexity in the course of their daily tasks. Not surprisingly, the strategies researchers and designers use to cope with complexity are similar as well. For example, a standard tactic to manage complexity is to bound the world under consideration. Thus one might address only a single time slice of a dynamic process, or only a subset of the interconnections among parts of a highly coupled world. This strategy is limited because it is not clear whether the relevant aspects of the whole have been captured. First, parts of the problem-solving process may be missed or their importance underestimated; second, some aspects of problem solving may emerge only when more complex situations are directly examined.

For example, the role of problem formulation and reformulation in effective performance is often overlooked. Reducing the complexity of design or research questions by bounding the world to be considered merely displaces the complexity to the person in the operational world, rather than providing a strategy to cope with the true complexity of the actual problem-solving context. It is one major source of failure in the design of machine problem solvers. For example, the designer of a machine problem solver may assume that only one failure is possible in order to be able to completely enumerate possible solutions and to make use of classification problem-solving techniques (Clancey, 1985). However, the actual problem solver must cope with the possibility of multiple failures, misleading signals, and interacting disturbances (e.g., Pople, 1985; Woods and Roth, 1986).

The result is that we need, particularly in this time of advancing machine power, to understand human behavior in complex situations. What makes problem solving complex? How does complexity affect the performance of human and machine problem solvers? How can problem-solving performance in complex worlds be improved and deficiencies avoided? Understanding the factors that produce complexity, the cognitive demands that they create, and some of the cognitive failure forms that emerge when these demands are not met is essential if advances in machine power are to lead to new cognitive tools that actually enhance problem-solving performance. (See Dorner, 1983; Fischhoff, Lanir, and Johnson, 1986; Klein, in press; Montmollin and De Keyser, 1985; Rasmussen, 1986; Selfridge, Rissland, and Arbib, 1984, for other discussions on the nature of complexity in problem solving.)

Ecological

Cognitive engineering is ecological. It is about multidimensional, open worlds, and not about the artificially bounded, closed worlds typical of the laboratory or the engineer's desktop (e.g., Coombs and Hartley, 1987; Funder, 1987). Of course, virtually all of the work environments that we might be interested in are man-made. The point is that these worlds encompass more than the design intent; they exist in the world as natural problem-solving habitats.

An example of the ecological perspective is the need to study humans solving problems with tools (i.e., support systems), as opposed to laboratory research that continues, for the most part, to examine human performance stripped of any tools. How to put effective cognitive tools into the hands of practitioners is the sine qua non for cognitive engineering. From this viewpoint, quite a lot could be learned from examining the nature of the tools that people spontaneously create to work more effectively in some problem-solving environment, or examining how preexisting mechanisms are adapted to serve as tools, as occurred in Roth et al. (1987), or examining how tools provided for a practitioner are really put to use by practitioners. The studies by De Keyser (e.g., 1986) are nearly unique with respect to the latter.

In reducing the target world to a tractable laboratory or desktop world in search of precise results, we run the risk of eliminating the critical features of the world that drive behavior. This creates the problem of deciding what counts as an effective stimulus (as Gibson has pointed out in ecological perception) or, to use an alternative terminology, deciding what counts for a symbol. To decide this question, Gibson (1979) and Dennett (1982), among others, have pointed out the need for a semantic and pragmatic analysis of environment-cognitive agent relationships, with respect to the goals/resources of the agent and the demands/constraints in the environment. As a result, one has to pay very close attention to what people actually do in a problem-solving world, given the actual demands that they face (Woods, Roth, and Pople, 1987). Principle-driven design of support systems begins with understanding what are the difficult aspects of a problem-solving situation (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

About the Semantics of a Domain

A corollary to the foregoing points is that cognitive engineering must address the contents or semantics of a domain (e.g., Coombs, 1986; Coombs and Hartley, 1987). Purely syntactic and exclusively tool-driven approaches to developing support systems are vulnerable to the error of the third kind: solving the wrong problem. The danger is to fall into the psychologist's fallacy of William James (1890), whereby the psychologist's reality is confused with the psychological reality of the human practitioner in his or her problem-solving world. To guard against this danger, the psychologist or cognitive engineer must start with the working assumption that practitioner behavior is reasonable and attempt to understand how this behavior copes with the demands and constraints imposed by the problem-solving world in question. For example, the introduction of computerized alarm systems into power plant control rooms inadvertently so badly undermined the strategies operational personnel used to cope with some problem-solving demands that the systems had to be removed and the previous alarm system restored (Pope, 1978).


The question is not why people failed to accept a useful technology, but rather how the original alarm system supported, and the new system failed to support, operator strategies to cope with the world's problem-solving demands. This is not to say that the strategies developed to cope with the original task demands are always optimal, or even that they always produce acceptable levels of performance, but only that understanding how they function in the initial cognitive environment is a starting point to develop truly effective support systems (e.g., Roth and Woods, 1988).

Semantic approaches, on the other hand, are vulnerable to myopia. If each world is seen as completely unique and must be investigated tabula rasa, then cognitive engineering can be no more than a set of techniques that are used to investigate every world anew. If this were the case, it would impose strong practical constraints on principle-driven development of support systems, restricting it to cases in which the consequences of poor performance are extremely high.

To achieve relevance to specific worlds and generalizability across worlds, the cognitive language must be able to escape the language of particular worlds, as well as the language of particular computational mechanisms, and identify pragmatic reasoning situations, after Cheng and Holyoak (1985) and Cheng, Holyoak, Nisbett, and Oliver (1986). These reasoning situations are abstract relative to the language of the particular application in question, and therefore transportable across worlds; but they are also pragmatic in the sense that the reasoning involves knowledge of the things being reasoned about. More ambitious are attempts to build a formal cognitive language, for example, that by Coombs and Hartley (1987) through their work on coherence in model generative reasoning.


About Improved Performance

Cognitive engineering is not merely about the contents of a world; it is about changing behavior/performance in that world. This is both a practical consideration (improving performance or reducing errors justifies the investment from the point of view of the world in question) and a theoretical consideration (the ability to produce practical changes in performance is the criterion for demonstrating an understanding of the factors involved). Basic concepts are confirmed only when they generate treatments (aiding either on-line or off-line) that make a difference in the target world. Cheng's concepts about human deductive reasoning (Cheng and Holyoak, 1985; Cheng et al., 1986) generated treatments that produced very large performance changes, both absolutely and relative to the history of rather ineffectual alternative treatments of human biases in deductive reasoning.

To achieve the goal of enhanced performance, cognitive engineering must first identify the sources of error that impair the performance of the current problem-solving system. This means that there is a need for cognitive engineering to understand where, how, and why machine, human, and human-plus-machine problem solving breaks down in natural problem-solving habitats.

Buggy knowledge (missing, incomplete, or erroneous knowledge) is one source of error (e.g., Brown and Burton, 1978; Brown and VanLehn, 1980; Gentner and Stevens, 1983). The buggy knowledge approach provides a specification of the knowledge structures (e.g., incomplete or erroneous knowledge) that are postulated to produce the pattern of errors and correct responses that characterize the performance of particular individuals. The specification is typically embodied as a computer program and constitutes a theory of what these individuals know (including misconceptions). Human performance aiding then focuses on providing missing knowledge and correcting the knowledge bugs. From the point of view of computational power, more knowledge and more correct knowledge can be embodied and delivered in the form of a rule-based expert system, following a knowledge acquisition phase that determines the fine-grained domain knowledge.
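The buggy-knowledge idea can be made concrete with a toy example. The following sketch (hypothetical, not from the paper; all rule names and symptoms are invented for illustration) shows a minimal rule-based diagnoser in which a knowledge bug, here a missing rule, produces a systematic misdiagnosis, and in which "aiding" consists of supplying the missing rule:

```python
# Illustrative sketch: "buggy knowledge" as a missing rule in a toy
# rule-based diagnoser. Symptoms and faults are hypothetical.

def diagnose(symptoms, rules):
    """Return the fault of the first rule whose conditions all hold."""
    for condition, fault in rules:
        if condition <= symptoms:  # rule fires if its conditions are a subset
            return fault
    return "unknown"

# Buggy knowledge base: the rule linking low flow plus high temperature
# to a blocked valve is missing, so a less specific rule fires instead.
buggy_rules = [
    ({"high_temperature"}, "cooling fault"),
]

# Repaired knowledge base: the missing, more specific rule is added first.
repaired_rules = [
    ({"low_flow", "high_temperature"}, "blocked valve"),
] + buggy_rules

observed = {"low_flow", "high_temperature"}
print(diagnose(observed, buggy_rules))     # misdiagnosis from the knowledge bug
print(diagnose(observed, repaired_rules))  # correct once the rule is supplied
```

In this framing, aiding is simply a matter of delivering more (and more correct) rules; the paragraphs that follow argue why that is not the whole story.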

But the more critical question for effective human performance may be how knowledge is activated and utilized in the actual problem-solving environment (e.g., Bransford, Sherwood, Vye, and Rieser, 1986; Cheng et al., 1986). The question concerns not merely whether the problem solver knows some particular piece of domain knowledge, such as the relationship between two entities: does he or she know that it is relevant to the problem at hand, and does he or she know how to utilize this knowledge in problem solving? Studies of education and training often show that students successfully acquire knowledge that is potentially relevant to solving domain problems, but that they often fail to exhibit skilled performance; for example, differences in solving mathematical exercises versus word problems (see Gentner and Stevens, 1983, for examples).

The fact that people possess relevant knowledge does not guarantee that that knowledge will be activated and utilized when needed in the actual problem-solving environment. This is the issue of expression of knowledge. Education and training tend to assume that if a person can be shown to possess a piece of knowledge in any circumstance, then this knowledge should be accessible under all conditions in which it might be useful. In contrast, a variety of research has revealed dissociation effects whereby knowledge accessed in one context remains inert in another (Bransford et al., 1986; Cheng et al., 1986; Perkins and Martin, 1986). For example, Gick and Holyoak (1980) found that, unless explicitly prompted, people will fail to apply a recently learned problem-solution strategy to an isomorphic problem (see also Kotovsky, Hayes, and Simon, 1985). The critical question is not to show that the problem solver possesses domain knowledge, but rather to meet the more stringent criterion that situation-relevant knowledge is accessible under the conditions in which the task is performed. This has been called the problem of inert knowledge: knowledge that is accessed only in a restricted set of contexts.

The general conclusion of studies on the problem of inert knowledge is that possession of the relevant domain knowledge or strategies by itself is not sufficient to ensure that this knowledge will be accessed in new contexts. Off-line, training experiences need to promote an understanding of how concepts and procedures can function as tools for solving relevant problems (Bransford et al., 1986; Brown, Bransford, Ferrara, and Campione, 1983; Brown and Campione, 1986). Training has to be about more than simply student knowledge acquisition; it must also enhance the expression of knowledge by conditionalizing knowledge to its use via information about triggering conditions and constraints (Glaser, 1984).

Similarly, on-line, representations of the world can help or hinder problem solvers in recognizing what information or strategies are relevant to the problem at hand (Woods, 1986). For example, Fischhoff, Slovic, and Lichtenstein (1979) and Kruglanski, Friedland, and Farkash (1984) found that judgmental biases (e.g., representativeness) were greatly reduced or eliminated when aspects of the situation cued the relevance of statistical information and reasoning. Thus one dimension along which representations vary is their ability to provide prompts to the knowledge relevant in a given context.

The challenge for cognitive engineering is to study and develop ways to enhance the expression of knowledge and to avoid inert knowledge. What training content and experiences are necessary to develop conditionalized knowledge (Glaser, 1984; Lesh, 1987; Perkins and Martin, 1986)? What representations cue people about the knowledge that is relevant to the current context of goals, system state, and practitioner intentions (Wiecha and Henrion, 1987; Woods and Roth, 1988)?

About Systems

Cognitive engineering is about systems. One source of tremendous confusion has been an inability to clearly define the systems of interest. From one point of view, the computer program being executed is the end application of concern. In this case one often speaks of the interface, the tasks performed within the syntax of the interface, and human users of the interface. Notice that the application world (what the interface is used for) is deemphasized. The bulk of work on human-computer interaction takes this perspective. Issues of concern include designing for learnability (e.g., Brown and Newman, 1985; Carroll and Carrithers, 1984; Kieras and Polson, 1985) and designing for ease and pleasurableness of use (Malone, 1983; Norman, 1983; Shneiderman, 1986).

A second perspective is to distinguish the interface from the application world (Hollnagel, Mancini, and Woods, 1986; Mancini, Woods, and Hollnagel, in press; Miyata and Norman, 1986; Rasmussen, 1986; Stefik et al., 1985). For example, text-editing tasks are performed only in some larger context, such as transcription, data entry, and composition. The interface is an external representation of an application world; that is, a medium through which agents come to know and act on the world: troubleshooting electronic devices (Davis, 1983), logistic maintenance systems, managing data communication networks, managing power distribution networks, medical diagnosis (Cohen et al., 1987; Gadd and Pople, 1987), aircraft and helicopter flight decks (Pew et al., 1986), air traffic control systems, process control accident response (Woods, Roth, and Pople, 1987), and command and control of a battlefield (e.g., Fischhoff et al., 1986). Tasks are properties of the world in question, although performance of these fundamental tasks (i.e., demands) is affected by the design of the external representation (e.g., Mitchell and Saisi, 1987). The human is not a passive user of a computer program but is an active problem solver in some world. Therefore, we will generally refer to people as domain agents, or actors, or problem solvers, and not as users.

In part, the difference in the foregoing views can be traced to differences in the cognitive complexity of the domain task being supported. Research on person-computer interaction has typically dealt with office applications (e.g., word processors for document preparation or copying machines for duplicating material) in which the goals to be accomplished (e.g., replace word 1 with word 2) and the steps required to accomplish them are relatively straightforward. These applications fall at one extreme of the cognitive complexity space. In contrast, there are many decision-making and supervisory environments (e.g., military situation assessment, medical diagnosis) in which problem formulation, situation assessment, goal definition, plan generation, and plan monitoring and adaptation are significantly more complex. It is in designing interfaces and aids for these applications that it is essential to distinguish the world to be acted on from the interface or window on the world (how one comes to know that world) and from the agents who can act directly or indirectly on the world.

The system of interest in design should not be the machine problem solver per se, nor should the focus of interest in evaluation be the performance of the machine problem solver alone. Ultimately the focus must be the design and the performance of the human-machine problem-solving ensemble: how to couple human intelligence and machine power in a single integrated system that maximizes overall performance.

About Multiple Cognitive Agents

A large number of the worlds that cognitive engineering should be able to address contain multiple agents who can act on the world in question (e.g., command and control, process control, data communication networks). Not only do we need to be clear about where systemic boundaries are drawn with respect to the application world and interfaces to or representations of the world; we also need to be clear about the different agents who can act directly or indirectly on the world. Cognitive engineering must be able to address systems with multiple cognitive agents. This applies to multiple human cognitive systems, often called distributed decision making (e.g., Fischhoff, 1986; Fischhoff et al., 1986; March and Weisinger-Baylon, 1986; Rochlin, La Porte, and Roberts, in press; Schum, 1980).

Because of the expansions in computational powers, the machine element can be thought of as a partially autonomous cognitive agent in its own right. This raises the problem of how to build a cognitive system that combines both human and machine cognitive systems; in other words, joint cognitive systems (Hollnagel, Mancini, and Woods, 1986; Mancini et al., in press). When a system includes these machine agents, the human role is not eliminated but shifted. This means that changes in automation are changes in the joint human-machine cognitive system, and the design goal is to maximize overall performance.

One metaphor that is often invoked to frame questions about the relationship between human and machine intelligence is to examine human-human relationships in multiperson problem-solving or advisory situations and then to transpose the results to human-intelligent machine interaction (e.g., Coombs and Alty, 1984). Following this metaphor leads Muir (1987) to raise the question of the role of trust between man and machine in effective performance. One provocative question that Muir's analysis generates is: how does the level of trust between human and machine problem solvers affect performance? The practitioner's judgment of machine competence or predictability can be miscalibrated, leading to excessive trust or mistrust. Either a system will be underutilized or ignored when it could provide effective assistance, or the practitioner will defer to the machine even in areas that challenge or exceed the machine's range of competence.

Another question concerns how trust is established between human and machine. Trust or mistrust is based on cumulative experience with the other agent that provides evidence about enduring characteristics of the agent, such as competence and predictability. This means that factors about how new technology is introduced into the work environment can play a critical role in building or undermining trust in the machine problem solver. If this stage of technology introduction is mishandled (for example, practitioners are exposed to the system before it is adequately debugged), the practitioner's trust in the machine's competence can be undermined. Muir's analysis shows how variations in explanation and display facilities affect how the person will use the machine, by affecting his or her ability to see how the machine works and therefore his or her level of calibration. Muir also points out how human information-processing biases can affect how the evidence of experience is interpreted in the calibration process.

A second metaphor that is frequently invoked is supervisory control (Rasmussen, 1986; Sheridan and Hennessy, 1984). Again, the machine element is thought of as a semiautonomous cognitive system, but in this case it is a lower-order subordinate, albeit partially autonomous. The human supervisor generally has a wider range of responsibility, and he or she possesses ultimate responsibility and authority. Boy (1987) uses this metaphor to guide the development of assistant systems built from AI technology.

In order for a supervisory control architecture between human and machine agents to function effectively, several requirements must be met that, as Woods (1986) has pointed out, are often overlooked when tool-driven constraints dominate design. First, the supervisor must have real as well as titular authority; machine problem solvers can be designed and introduced in such a way that the human retains the responsibility for outcomes without any effective authority. Second, the supervisor must be able to redirect the lower-order machine cognitive system. Roth et al. (1987) found that some practitioners tried to devise ways to instruct an expert system in situations in which the machine's problem solving had broken down, even when the machine's designer had provided no such mechanisms. Third, in order to be able to supervise another agent, there is need for a common or shared representation of the state of the world and of the state of the problem-solving process (Woods and Roth, 1988b); otherwise communication between the agents will break down (e.g., Suchman, 1987).

Significant attention has been devoted to the issue of how to get intelligent machines to assess the goals and intentions of humans without requiring explicit statements (e.g., Allen and Perrault, 1980; Quinn and Russell, 1986). However, the supervisory control metaphor highlights that it is at least as important to pay attention to what information or knowledge people need to track the intelligent machine's state of mind (Woods and Roth, 1988a).

A third metaphor is to consider the new machine capabilities as extensions and expansions along a dimension of machine power. In this metaphor, machines are tools; people are tool builders and tool users. Technological development has moved from physical tools (tools that magnify capacity for physical work) to perceptual tools (extensions to the human perceptual apparatus, such as medical imaging) and now, with the arrival of AI technology, to cognitive tools. (Although this type of tool has a much longer history, e.g., aide-memoires or decision analysis, AI has certainly increased the interest in and ability to provide cognitive tools.)

In this metaphor, the question of the relationship between machine and human takes the form: what kind of tool is an intelligent machine (e.g., Ehn and Kyng, 1984; Suchman, 1987; Woods, 1986)? At one extreme, the machine can be a prosthesis that compensates for a deficiency in human reasoning or problem solving. This could be a local deficiency for the population of expected human practitioners, or a global weakness in human reasoning. At the other extreme, the machine can be an instrument in the hands of a fundamentally competent but limited-resource human practitioner (Woods, 1986). The machine aids the practitioner by providing increased or new kinds of resources (either knowledge resources or processing resources, such as an expanded field of attention).

The extra resources may support improved performance in several ways. One path is to off-load overhead information-processing activities from the person to the machine, to allow the human practitioner to focus his or her resources on higher-level issues and strategies. Examples include keeping track of multiple ongoing activities in an external memory, performing basic data computations or transformations, and collecting the evidence related to decisions about particular domain issues, as occurred recently with new computer-based displays in nuclear power plant control rooms. Extra resources may help to improve performance in another way, by allowing a restructuring of how the human performs the task, shifting performance onto a new, higher plateau (see Pea, 1985). This restructuring concept is in contrast to the usual notion of new systems as amplifiers of user capabilities. As Pea (1985) points out, the amplification metaphor implies that support systems improve human performance by increasing the strength or power of the cognitive processes the human problem solver goes through to solve the problem, but without any change in the underlying activities, processes, or strategies that determine how the problem is solved. Alternatively, the resources provided (or not provided) by new performance aids and interface systems can support restructuring of the activities, processes, or strategies that carry out the cognitive functions relevant to performing domain tasks (e.g., Woods and Roth, 1988). New levels of performance are now possible, and the kinds of errors one is prone to (and therefore the consequences of errors) change as well.
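The off-loading path can be illustrated with a minimal sketch (hypothetical, not drawn from any system in the paper): an "external memory" that keeps track of multiple interrupted activities so the practitioner need not hold them in working memory while attending to higher-level strategy. The class and activity names are invented for illustration.

```python
# Illustrative sketch: off-loading one overhead activity -- keeping track
# of interrupted tasks and where to resume them -- into an external memory.

class ExternalMemory:
    """Records suspended activities so the practitioner can resume them later."""

    def __init__(self):
        self.suspended = []

    def suspend(self, activity, resume_at):
        # Record what was interrupted and where to pick it up again.
        self.suspended.append((activity, resume_at))

    def pending(self):
        # The bookkeeping burden now lives in the machine, not the head.
        return [f"{activity} (resume at {resume_at})"
                for activity, resume_at in self.suspended]

memory = ExternalMemory()
memory.suspend("verify pump restart", "step 12")
memory.suspend("confirm valve lineup", "step 7")
print(memory.pending())
```

The point of the sketch is the division of labor, not the data structure: the machine carries the overhead of tracking suspended steps, freeing the practitioner's resources, which is distinct from the restructuring path Pea (1985) describes.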


The instrumental perspective suggests that the most effective power provided by good cognitive tools is conceptualization power (Woods and Roth, 1988a). The importance of conceptualization power in effective problem-solving performance is often overlooked because the part of the problem-solving process that it most crucially affects, problem formulation and reformulation, is often left out of studies of problem solving and the design basis of new support systems. Support systems that increase conceptualization power (1) enhance a problem solver's ability to experiment with possible worlds or strategies (e.g., Hollan, Hutchins, and Weitzman, 1984; Pea, 1985; Woods et al., 1987); (2) enhance the ability to visualize or to make concrete the abstract and uninspectable (analogous to perceptual tools), in order to better see the implications of a concept and to help one restructure one's view of the problem (Becker and Cleveland, 1984; Coombs and Hartley, 1987; Hutchins, Hollan, and Norman, 1985; Pople, 1985); and (3) enhance error detection by providing better feedback about the effects/results of actions (Rizzo, Bagnara, and Visciola, 1987).

Problem-Driven

Cognitive engineering is problem-driven, tool-constrained. This means that cognitive engineering must be able to analyze a problem-solving context and understand the sources of both good and poor performance; that is, the cognitive problems to be solved or challenges to be met (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

To build a cognitive description of a problem-solving world, one must understand how representations of the world interact with different cognitive demands imposed by the application world in question and with characteristics of the cognitive agents, both for existing and prospective changes in the world. Building a cognitive description is part of a problem-driven approach to the application of computational power.

The results from this analysis are used to define the kind of solutions that are needed to enhance successful performance: to meet cognitive demands of the world, to help the human function more expertly, to eliminate or mitigate error-prone points in the total cognitive system (demand-resource mismatches). The results of this process then can be deployed in many possible ways, as constrained by tool-building limitations and tool-building possibilities: exploration training worlds, new information representation aids, advisory systems, or machine problem solvers (see Roth and Woods, 1988; Woods and Roth, 1988).

In tool-driven approaches, knowledge acquisition focuses on describing domain knowledge in terms of the syntax of computational mechanisms; that is, the language of implementation is used as a cognitive language. Semantic questions are displaced, either to whomever selects the computational mechanisms or to the domain expert who enters knowledge.

The alternative is to provide an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean about problem solving in the domain (Woods, 1988). Acquiring and using a domain semantics is essential to avoiding potential errors and specifying performance boundaries when building intelligent machines (Roth et al., 1987). Techniques for analyzing cognitive demands not only help characterize a particular world but also help to build a repertoire of general cognitive situations that are transportable. There is a clear trend toward this conception of knowledge acquisition in order to achieve more effective decision support and fewer brittle machine problem solvers (e.g., Clancey, 1985; Coombs, 1986; Gruber and Cohen, 1987).

AN EXAMPLE OF HOW COGNITIVE AND COMPUTATIONAL TECHNOLOGIES INTERACT

To illustrate the role of cognitive engineering in the deployment of new computational powers, consider a case in human-computer interaction (for other cases, see Mitchell and Forren, 1987; Mitchell and Saisi, 1987; Roth and Woods, 1988; Woods and Roth, 1988). It is one example of how purely technology-driven deployment of new automation capabilities can produce unintended and unforeseen negative consequences. In this case, an attempt was made to implement a computerized procedure system using a commercial hypertext system for building and navigating large network data bases. Because cognitive engineering issues were not considered in the application of the new technology, a high-level person-machine performance problem resulted: the getting-lost phenomenon (Woods, 1984). Based on a cognitive analysis of the world's demands, it was possible to redesign the system to support domain actors and eliminate the getting-lost problems (Elm and Woods, 1985). Through cognitive engineering, it proved possible to build a more effective computerized procedure system that, for the most part, was within the technological boundaries set by the original technology chosen for the application.

The data base application in question was designed to computerize paper-based instructions for nuclear power plant emergency operation. The system was built on a network data base shell with a built-in interface for navigating the network (Robertson, McCracken, and Newell, 1981). The shell already treated human-computer interface issues, so it was assumed possible to create the computerized system simply by entering domain knowledge (i.e., the current instructions as implemented for the paper medium) into the interface and network data base framework provided by the shell.

The system contained two kinds of frames: menu frames, which served to point to other frames, and content frames, which contained instructions from the paper procedures (generally one procedure step per frame). In preliminary tests of the system it was found that people uniformly failed to complete recovery tasks with procedures computerized in this way. They became disoriented or lost: unable to keep procedure steps in pace with plant behavior, unable to determine where they were in the network of frames, unable to decide where to go next, or unable even to find places where they knew they should be (i.e., they diagnosed the situation and knew the appropriate responses as trained operators, yet could not find the relevant procedural steps in the network). These results were found with people experienced with the paper-based procedures and plant operations, as well as with people knowledgeable in the frame-network software package and how the procedures were implemented within it.
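The two-level frame organization described above can be sketched as a minimal data structure. This is a hypothetical reconstruction: the class names and the sample steps are invented, and the original ZOG-style shell is not reproduced.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """One node in the procedure network: a menu frame points to other
    frames; a content frame holds roughly one procedure step."""
    name: str
    kind: str                                  # "menu" or "content"
    text: str = ""
    links: list = field(default_factory=list)  # names of reachable frames

def build_network(frames):
    """Index frames by name so links can be followed."""
    return {f.name: f for f in frames}

# A tiny illustrative network (invented step names, not the real procedures).
net = build_network([
    Frame("top", "menu", "Select a step", links=["step-1", "step-2"]),
    Frame("step-1", "content", "Verify reactor trip"),
    Frame("step-2", "content", "Check auxiliary feedwater flow"),
])

def menu_ratio(network):
    """Fraction of frames that are menus; the higher this is, the more
    navigation hops separate the operator from actual instructions."""
    menus = sum(1 for f in network.values() if f.kind == "menu")
    return menus / len(network)
```

In a structure like this, nearly every traversal from one instruction to the next passes through menu frames, which is the structural property the preliminary tests exposed.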

What was the source of the disorientation problem? It resulted from a failure to analyze the cognitive demands associated with using procedures in an externally paced world. For example, in using the procedures the operator often is required to interrupt one activity and transition to another step in the procedure, or to a different procedure, depending on plant conditions and plant responses to operator actions. As a result, operators need to be able to rapidly transition across procedure boundaries and to return to incomplete steps. Because of the size of a frame, there was a very high proportion of menu frames relative to content frames, and the content frames provided a narrow window on the world. This structure made it difficult to read ahead to anticipate instructions, to mark steps pending completion and return to them easily, to see the organization of the steps, or to mark a trail of activities carried out during the recovery. Many activities that are inherently easy to perform in a physical book turned out to be very difficult to carry out; reading ahead, for example. The result was a mismatch between user information-processing activities during domain tasks and the structure of the interface as a representation of the world of recovery from abnormalities.

These results triggered a full design cycle that began with a cognitive analysis to determine the user information-handling activities needed to effectively accomplish recovery tasks in emergency situations. Following procedures was not simply a matter of linear, step-by-step execution of instructions; rather, it required the ability to maintain a broad context of the purpose and relationships among the elements in the procedure (see also Brown et al., 1982; Roth et al., 1987). Operators needed to maintain awareness of the global context (i.e., how a given step fits into the overall plan), to anticipate the need for actions by looking ahead, and to monitor for changes in plant state that would require adaptation of the current response plan.

A variety of cognitive engineering techniques were utilized in a new interface design to support these demands (see Woods, 1984). First, a spatial metaphor was used to make the system more like a physical book. Second, display selection/movement options were presented in parallel, rather than sequentially, with procedural information (defining two types of windows in the interface). Transition options were presented at several grains of analysis to support moves from step to step as easily as moves across larger units in the structure of the procedure system. In addition, incomplete steps were automatically tracked, and those steps were made directly accessible (e.g., electronic bookmarks or placeholders).
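The automatic tracking of incomplete steps can be illustrated with a small sketch. The class and method names here are a hypothetical API, not the actual system.

```python
class ProcedureSession:
    """Sketch of the 'electronic bookmark' idea: steps that are opened
    but not completed are tracked automatically, so the operator can
    jump straight back to them without renavigating menus."""

    def __init__(self):
        self.pending = []  # incomplete steps, in the order encountered

    def open_step(self, step):
        # Tracked as a side effect of use; no explicit operator action.
        if step not in self.pending:
            self.pending.append(step)

    def complete_step(self, step):
        self.pending.remove(step)

    def placeholders(self):
        """Directly accessible return points to unfinished steps."""
        return list(self.pending)
```

The key design point is that the bookkeeping is done by the system as a by-product of normal use, rather than demanding that the operator mark a trail while also keeping pace with the plant.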

To provide the global context within which the current procedure step occurs, the step of interest is presented in detail and is embedded in a skeletal structure of the larger response plan of which it is a part (Furnas, 1986; Woods, 1984). Context sensitivity was supported by displaying the rules for possible adaptation or shifts in the current response plan that are relevant to the current context, in parallel with current relevant options and the current region of interest in the procedures (a third window). Note how the cognitive analysis of the domain defined what types of data needed to be seen effectively in parallel, which then determined the number of windows required. Also note that the cognitive engineering redesign was, with a few exceptions, directly implementable within the base capabilities of the interface shell.
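The detail-plus-skeleton display is in the spirit of Furnas's (1986) generalized fisheye view, where what is shown depends on an item's a priori importance minus its distance from the current focus. A minimal sketch, with invented step names and illustrative thresholds:

```python
def degree_of_interest(api, distance):
    # Furnas-style degree of interest: a priori importance of an item
    # minus its distance from the current focus point.
    return api - distance

def skeletal_view(steps, focus, detail=0, outline=-2):
    """Render the step at `focus` in full, nearby or important steps as
    one-line headings, and suppress the rest (thresholds are invented).
    `steps` is a list of (name, a_priori_importance, text) tuples."""
    lines = []
    for i, (name, api, text) in enumerate(steps):
        doi = degree_of_interest(api, abs(i - focus))
        if doi >= detail:
            lines.append(f"{name}: {text}")   # full step text
        elif doi >= outline:
            lines.append(name)                # skeletal heading only
    return lines

# Invented steps for illustration only.
steps = [
    ("E-0.1", 0, "Verify reactor trip"),
    ("E-0.2", 0, "Verify turbine trip"),
    ("E-0.3", 0, "Check safety injection actuation"),
    ("E-0.4", 0, "Check auxiliary feedwater flow"),
]
```

With the focus on the third step, only that step is rendered in detail while its neighbors appear as headings, giving the operator the local detail embedded in the skeleton of the larger plan.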

As the foregoing example saliently demonstrates, there can be severe penalties for failing to adequately map the cognitive demands of the environment. However, if we understand the cognitive requirements imposed by the domain, then a variety of techniques can be employed to build support systems for those functions.

SUMMARY

The problem of providing effective decision support hinges on how the designer decides what will be useful in a particular application. Can researchers provide designers with concepts and techniques to determine what will be useful support systems, or are we condemned to simply build what can be built practically and wait for the judgment of experience? Is principle-driven design possible?

A vigorous and viable cognitive engineering can provide the knowledge and techniques necessary for principle-driven design. Cognitive engineering does this by providing the basis for a problem-driven, rather than a technology-driven, approach whereby the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver. Cognitive engineering can address (a) existing cognitive systems, in order to identify deficiencies that cognitive system redesign can correct, and (b) prospective cognitive systems, as a design tool during the allocation of cognitive tasks and the development of an effective joint architecture. In this paper we have attempted to outline the questions that need to be answered to make this promise real and to point to research that already has begun to provide the necessary concepts and techniques.

REFERENCES

Allen, J., and Perrault, C. (1980). Analyzing intention in utterances. Artificial Intelligence, 15, 143-178.
Becker, R. A., and Cleveland, W. S. (1984). Brushing the scatterplot matrix: High-interaction graphical methods for analyzing multidimensional data (Tech. Report). Murray Hill, NJ: AT&T Bell Laboratories.
Boy, G. A. (1987). Operator assistant systems. International Journal of Man-Machine Studies, 27, 541-554. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), Cognitive engineering in dynamic worlds (in press). London: Academic Press.
Bransford, J., Sherwood, R., Vye, N., and Rieser, J. (1986). Teaching and problem solving: Research foundations. American Psychologist, 41, 1078-1089.
Brown, A. L., Bransford, J. D., Ferrara, R. A., and Campione, J. C. (1983). Learning, remembering, and understanding. In J. H. Flavell and E. M. Markman (Eds.), Carmichael's manual of child psychology. New York: Wiley.
Brown, A. L., and Campione, J. C. (1986). Psychological theory and the study of learning disabilities. American Psychologist, 41, 1059-1068.
Brown, J. S., and Burton, R. R. (1978). Diagnostic models for procedural bugs in basic mathematics. Cognitive Science, 2, 155-192.
Brown, J. S., Moran, T. P., and Williams, M. D. (1982). The semantics of procedures (Tech. Report). Palo Alto, CA: Xerox Palo Alto Research Center.
Brown, J. S., and Newman, S. E. (1985). Issues in cognitive and social ergonomics: From our house to Bauhaus. Human-Computer Interaction, 1, 359-391.
Brown, J. S., and VanLehn, K. (1980). Repair theory: A generative theory of bugs in procedural skills. Cognitive Science, 4, 379-426.
Carroll, J. M., and Carrithers, C. (1984). Training wheels in a user interface. Communications of the ACM, 27, 800-806.
Cheng, P. W., and Holyoak, K. J. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.
Cheng, P., Holyoak, K., Nisbett, R., and Oliver, L. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 18, 293-328.
Clancey, W. J. (1985). Heuristic classification. Artificial Intelligence, 27, 289-350.
Cohen, P., Day, D., Delisio, J., Greenberg, M., Kjeldsen, R., and Suthers, D. (1987). Management of uncertainty in medicine. In Proceedings of the IEEE Conference on Computers and Communications. New York: IEEE.
Coombs, M. J. (1986). Artificial intelligence and cognitive technology: Foundations and perspectives. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.
Coombs, M. J., and Alty, J. L. (1984). Expert systems: An alternative paradigm. International Journal of Man-Machine Studies, 20, 21-43.
Coombs, M. J., and Hartley, R. T. (1987). The MGR algorithm and its application to the generation of explanations for novel events. International Journal of Man-Machine Studies, 27, 679-708. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), Cognitive engineering in dynamic worlds (in press). London: Academic Press.
Davis, R. (1983). Reasoning from first principles in electronic troubleshooting. International Journal of Man-Machine Studies, 19, 403-423.
De Keyser, V. (1986). Les interactions hommes-machine: Caractéristiques et utilisations des différents supports d'information par les opérateurs (Person-machine interaction: How operators use different information channels) (Rapport Politique Scientifique/FAST no. 8). Liège, Belgium: Psychologie du Travail, Université de l'Etat à Liège.
Dennett, D. (1982). Beyond belief. In A. Woodfield (Ed.), Thought and object. Oxford: Clarendon Press.
Dorner, D. (1983). Heuristics and cognition in complex systems. In R. Groner, M. Groner, and W. F. Bischof (Eds.), Methods of heuristics. Hillsdale, NJ: Erlbaum.
Ehn, P., and Kyng, M. (1984). A tool perspective on design of interactive computer support for skilled workers. Unpublished manuscript, Swedish Center for Working Life, Stockholm.
Elm, W. C., and Woods, D. D. (1985). Getting lost: A case study in interface design. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 927-931). Santa Monica, CA: Human Factors Society.
Fischhoff, B. (1986). Decision making in complex systems. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support. New York: Springer-Verlag.
Fischhoff, B., Slovic, P., and Lichtenstein, S. (1979). Improving intuitive judgment by subjective sensitivity analysis. Organizational Behavior and Human Performance, 23, 339-359.
Fischhoff, B., Lanir, Z., and Johnson, S. (1986). Military risk taking and modern C3I (Tech. Report 86-2). Eugene, OR: Decision Research.
Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgments. Psychological Bulletin, 101, 75-90.
Furnas, G. W. (1986). Generalized fisheye views. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 16-23). New York: ACM/SIGCHI.
Gadd, C. S., and Pople, H. E. (1987). An interpretation synthesis model of medical teaching rounds discourse: Implications for expert system interaction. International Journal of Educational Research, 1.
Gentner, D., and Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Erlbaum.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Gick, M. L., and Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-365.
Glaser, R. (1984). Education and thinking: The role of knowledge. American Psychologist, 39, 93-104.
Gruber, T., and Cohen, P. (1987). Design for acquisition: Principles of knowledge system design to facilitate knowledge acquisition (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 143-159.
Henderson, A., and Card, S. (1986). Rooms: The use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface (Tech. Report). Palo Alto, CA: Xerox PARC.
Hirschhorn, L. (1984). Beyond mechanization: Work and technology in a postindustrial age. Cambridge, MA: MIT Press.
Hollan, J., Hutchins, E., and Weitzman, L. (1984). Steamer: An interactive, inspectable, simulation-based training system. AI Magazine, 4, 15-27.
Hollnagel, E., Mancini, G., and Woods, D. D. (Eds.). (1986). Intelligent decision support in process environments. New York: Springer-Verlag.
Hollnagel, E., and Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.
Hoogovens Report. (1976). Human factors evaluation: Hoogovens No. 2 hot strip mill (Tech. Report FR251). London: British Steel Corporation/Hoogovens.
Hutchins, E., Hollan, J., and Norman, D. A. (1985). Direct manipulation interfaces. Human-Computer Interaction, 1, 311-338.
James, W. (1890). The principles of psychology. New York: Holt.
Kieras, D. E., and Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365-394.
Klein, G. A. (in press). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine research (vol. 5). Greenwich, CT: JAI Press.
Kotovsky, K., Hayes, J. R., and Simon, H. A. (1985). Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 17, 248-294.
Kruglanski, A., Friedland, N., and Farkash, E. (1984). Lay persons' sensitivity to statistical information: The case of high perceived applicability. Journal of Personality and Social Psychology, 46, 503-518.
Lesh, R. (1987). The evolution of problem representations in the presence of powerful conceptual amplifiers. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics. Hillsdale, NJ: Erlbaum.
Malone, T. W. (1983). How do people organize their desks? Implications for designing office automation systems. ACM Transactions on Office Information Systems, 1, 99-112.
Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press. (Special issue of International Journal of Man-Machine Studies, vol. 27)
March, J. G., and Weisinger-Baylon, R. (Eds.). (1986). Ambiguity and command. Marshfield, MA: Pitman Publishing.
McKendree, J., and Carroll, J. M. (1986). Advising roles of a computer consultant. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 35-40). New York: ACM/SIGCHI.
Miller, P. L. (1983). ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 449-461.
Mitchell, C., and Forren, M. G. (1987). Multimodal user input to supervisory control systems: Voice-augmented keyboard. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 594-607.
Mitchell, C., and Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.
Miyata, Y., and Norman, D. A. (1986). Psychological issues in support of multiple activities. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.
Montmollin, M. de, and De Keyser, V. (1985). Expert logic vs. operator logic. In G. Johannsen, G. Mancini, and L. Martensson (Eds.), Analysis, design and evaluation of man-machine systems. CEC-JRC Ispra, Italy: IFAC.
Muir, B. (1987). Trust between humans and machines. International Journal of Man-Machine Studies, 27, 527-539. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), Cognitive engineering in dynamic worlds (in press). London: Academic Press.
Newell, A., and Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209-242.
Noble, D. F. (1984). Forces of production: A social history of industrial automation. New York: Alfred A. Knopf.
Norman, D. A. (1981). Steps towards a cognitive engineering (Tech. Report). San Diego: University of California, San Diego, Program in Cognitive Science.
Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the ACM, 26, 254-258.
Norman, D. A., and Draper, S. W. (1986). User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.
Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.
Perkins, D., and Martin, F. (1986). Fragile knowledge and neglected strategies in novice programmers. In E. Soloway and S. Iyengar (Eds.), Empirical studies of programmers. Norwood, NJ: Ablex.
Pew, R. W., et al. (1986). Cockpit automation technology (Tech. Report 6133). Cambridge, MA: BBN Laboratories, Inc.
Pope, R. H. (1978). Power station control room and desk design: Alarm system and experience in the use of CRT displays. In Proceedings of the International Symposium on Nuclear Power Plant Control and Instrumentation. Cannes, France.
Pople, H., Jr. (1985). Evolution of an expert system: From Internist to Caduceus. In L. De Lotto and M. Stefanelli (Eds.), Artificial intelligence in medicine. New York: Elsevier Science Publishers.
Quinn, L., and Russell, D. M. (1986). Intelligent interfaces: User models and planners. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 314-320). New York: ACM/SIGCHI.
Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.
Rizzo, A., Bagnara, S., and Visciola, M. (1987). Human error detection processes. International Journal of Man-Machine Studies, 27, 555-570. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), Cognitive engineering in dynamic worlds (in press). London: Academic Press.
Robertson, G., McCracken, D., and Newell, A. (1981). The ZOG approach to man-machine communication. International Journal of Man-Machine Studies, 14, 461-488.
Rochlin, G. I., La Porte, T. R., and Roberts, K. H. (in press). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review.
Roth, E. M., Bennett, K., and Woods, D. D. (1987). Human interaction with an intelligent machine. International Journal of Man-Machine Studies, 27, 479-525. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), Cognitive engineering in dynamic worlds (in press). London: Academic Press.
Roth, E. M., and Woods, D. D. (1988). Aiding human performance I: Cognitive analysis. Le Travail Humain, 51(1), 39-64.
Schum, D. A. (1980). Current developments in research on cascaded inference. In T. S. Wallstein (Ed.), Cognitive processes in decision and choice behavior. Hillsdale, NJ: Erlbaum.
Selfridge, O. G., Rissland, E. L., and Arbib, M. A. (1984). Adaptive control of ill-defined systems. New York: Plenum Press.
Sheridan, T., and Hennessy, R. (Eds.). (1984). Research and modeling of supervisory control behavior. Washington, DC: National Academy Press.
Shneiderman, B. (1986). Seven plus or minus two central issues in human-computer interaction. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 343-349). New York: ACM/SIGCHI.
Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S., and Suchman, L. (1985, September). Beyond the chalkboard: Using computers to support collaboration and problem solving in meetings (Tech. Report). Palo Alto, CA: Intelligent Systems Laboratory, Xerox Palo Alto Research Center.
Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.
Wiecha, C., and Henrion, M. (1987). A graphical tool for structuring and understanding quantitative decision models. In Proceedings of the Workshop on Visual Languages. New York: IEEE Computer Society.
Wiener, E. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.
Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229-244.
Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.
Woods, D. D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L. P. Goodstein, H. B. Andersen, and S. E. Olsen (Eds.), Mental models, tasks and errors: A collection of essays to celebrate Jens Rasmussen's 60th birthday. London: Taylor and Francis.
Woods, D. D., and Hollnagel, E. (1987). Mapping cognitive demands in complex problem-solving worlds (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 257-275.
Woods, D. D., and Roth, E. M. (1986). Models of cognitive behavior in nuclear power plant personnel (NUREG-CR-4532). Washington, DC: U.S. Nuclear Regulatory Commission.
Woods, D. D., and Roth, E. M. (1988a). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interaction. New York: North-Holland.
Woods, D. D., and Roth, E. M. (1988b). Aiding human performance II: From cognitive analysis to support systems. Le Travail Humain, 51, 139-172.
Woods, D. D., Roth, E. M., and Pople, H. (1987). Cognitive Environment Simulation: An artificial intelligence system for human performance assessment (NUREG-CR-4862). Washington, DC: U.S. Nuclear Regulatory Commission.


plementary to computational technology is in stark contrast to another view, whereby cognitive engineering is a necessary but bothersome step to acquire the knowledge fuel necessary to run the computational engines of today and tomorrow.

COGNITIVE ENGINEERING IS

There has been growing recognition of this need to develop an applied cognitive science that draws on knowledge and techniques of cognitive psychology and related disciplines to provide the basis for principle-driven design (Brown and Newman, 1985; Newell and Card, 1985; Norman, 1981; Norman and Draper, 1986; Rasmussen, 1986). In this section we will examine some of the characteristics of cognitive engineering (or whatever label you prefer: cognitive technologies, cognitive factors, cognitive ergonomics, knowledge engineering). The specific perspective for this exposition is that of cognitive systems engineering (Hollnagel and Woods, 1983; Woods, 1986).

about Complex Worlds

Cognitive engineering is about human behavior in complex worlds. Studying human behavior in complex worlds (and designing support systems) is one case of people engaged in problem solving in a complex world, analogous to the task of other human problem solvers (e.g., operators, troubleshooters) who confront complexity in the course of their daily tasks. Not surprisingly, the strategies researchers and designers use to cope with complexity are similar as well. For example, a standard tactic to manage complexity is to bound the world under consideration. Thus one might address only a single time slice of a dynamic process, or only a subset of the interconnections among parts of a highly coupled world. This strategy is limited because it is not clear whether the relevant aspects of the whole have been captured. First, parts of the problem-solving process may be missed or their importance underestimated; second, some aspects of problem solving may emerge only when more complex situations are directly examined.

For example, the role of problem formulation and reformulation in effective performance is often overlooked. Reducing the complexity of design or research questions by bounding the world to be considered merely displaces the complexity to the person in the operational world, rather than providing a strategy to cope with the true complexity of the actual problem-solving context. It is one major source of failure in the design of machine problem solvers. For example, the designer of a machine problem solver may assume that only one failure is possible in order to completely enumerate possible solutions and to make use of classification problem-solving techniques (Clancey, 1985). However, the actual problem solver must cope with the possibility of multiple failures, misleading signals, and interacting disturbances (e.g., Pople, 1985; Woods and Roth, 1986).

The result is that we need, particularly in this time of advancing machine power, to understand human behavior in complex situations. What makes problem solving complex? How does complexity affect the performance of human and machine problem solvers? How can problem-solving performance in complex worlds be improved and deficiencies avoided? Understanding the factors that produce complexity, the cognitive demands that they create, and some of the cognitive failure forms that emerge when these demands are not met is essential if advances in machine power are to lead to new cognitive tools that actually enhance problem-solving performance. (See Dorner, 1983; Fischhoff, Lanir, and Johnson, 1986; Klein, in press; Montmollin and De Keyser, 1985; Rasmussen, 1986; and Selfridge, Rissland, and Arbib, 1984, for other discussions on the nature of complexity in problem solving.)

Ecological

Cognitive engineering is ecological. It is about multidimensional, open worlds, and not about the artificially bounded, closed worlds typical of the laboratory or the engineer's desktop (e.g., Coombs and Hartley, 1987; Funder, 1987). Of course, virtually all of the work environments that we might be interested in are man-made. The point is that these worlds encompass more than the design intent; they exist in the world as natural problem-solving habitats.

An example of the ecological perspective is the need to study humans solving problems with tools (i.e., support systems), as opposed to laboratory research that continues, for the most part, to examine human performance stripped of any tools. How to put effective cognitive tools into the hands of practitioners is the sine qua non for cognitive engineering. From this viewpoint, quite a lot could be learned from examining the nature of the tools that people spontaneously create to work more effectively in some problem-solving environment, or examining how preexisting mechanisms are adapted to serve as tools (as occurred in Roth et al., 1987), or examining how tools provided for a practitioner are really put to use by practitioners. The studies by De Keyser (e.g., 1986) are nearly unique with respect to the latter.

In reducing the target world to a tractable laboratory or desktop world in search of precise results, we run the risk of eliminating the critical features of the world that drive behavior. This creates the problem of deciding what counts as an effective stimulus (as Gibson has pointed out in ecological perception) or, to use an alternative terminology, deciding what counts for a symbol. To decide this question, Gibson (1979) and Dennett (1982), among others, have pointed out the need for a semantic and pragmatic analysis of environment-cognitive agent relationships with respect to the goals/resources of the agent and the demands/constraints in the environment. As a result, one has to pay very close attention to what people actually do in a problem-solving world, given the actual demands that they face (Woods, Roth, and Pople, 1987). Principle-driven design of support systems begins with understanding what are the difficult aspects of a problem-solving situation (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

about the Semantics of a Domain

A corollary to the foregoing points is that cognitive engineering must address the contents or semantics of a domain (e.g., Coombs, 1986; Coombs and Hartley, 1987). Purely syntactic and exclusively tool-driven approaches to develop support systems are vulnerable to the error of the third kind: solving the wrong problem. The danger is to fall into the psychologist's fallacy of William James (1890), whereby the psychologist's reality is confused with the psychological reality of the human practitioner in his or her problem-solving world. To guard against this danger, the psychologist or cognitive engineer must start with the working assumption that practitioner behavior is reasonable and attempt to understand how this behavior copes with the demands and constraints imposed by the problem-solving world in question. For example, the introduction of computerized alarm systems into power plant control rooms inadvertently so badly undermined the strategies operational personnel used to cope with some problem-solving demands that the systems had to be removed and the previous alarm system restored (Pope, 1978).


The question is not why people failed to accept a useful technology, but rather how the original alarm system supported, and the new system failed to support, operator strategies to cope with the world's problem-solving demands. This is not to say that the strategies developed to cope with the original task demands are always optimal, or even that they always produce acceptable levels of performance, but only that understanding how they function in the initial cognitive environment is a starting point to develop truly effective support systems (e.g., Roth and Woods, 1988).

Semantic approaches, on the other hand, are vulnerable to myopia. If each world is seen as completely unique and must be investigated tabula rasa, then cognitive engineering can be no more than a set of techniques that are used to investigate every world anew. If this were the case, it would impose strong practical constraints on principle-driven development of support systems, restricting it to cases in which the consequences of poor performance are extremely high.

To achieve relevance to specific worlds and generalizability across worlds, the cognitive language must be able to escape the language of particular worlds, as well as the language of particular computational mechanisms, and identify pragmatic reasoning situations, after Cheng and Holyoak (1985) and Cheng, Holyoak, Nisbett, and Oliver (1986). These reasoning situations are abstract relative to the language of the particular application in question, and therefore transportable across worlds; but they are also pragmatic in the sense that the reasoning involves knowledge of the things being reasoned about. More ambitious are attempts to build a formal cognitive language; for example, that by Coombs and Hartley (1987) through their work on coherence in model generative reasoning.


about Improved Performance

Cognitive engineering is not merely about the contents of a world; it is about changing behavior/performance in that world. This is both a practical consideration (improving performance or reducing errors justifies the investment from the point of view of the world in question) and a theoretical consideration (the ability to produce practical changes in performance is the criterion for demonstrating an understanding of the factors involved). Basic concepts are confirmed only when they generate treatments (aiding, either on-line or off-line) that make a difference in the target world. Cheng's concepts about human deductive reasoning (Cheng and Holyoak, 1985; Cheng et al., 1986) generated treatments that produced very large performance changes, both absolutely and relative to the history of rather ineffectual alternative treatments to human biases in deductive reasoning.

To achieve the goal of enhanced performance, cognitive engineering must first identify the sources of error that impair the performance of the current problem-solving system. This means that there is a need for cognitive engineering to understand where, how, and why machine, human, and human-plus-machine problem solving breaks down in natural problem-solving habitats.

Buggy knowledge (missing, incomplete, or erroneous knowledge) is one source of error (e.g., Brown and Burton, 1978; Brown and VanLehn, 1980; Gentner and Stevens, 1983). The buggy knowledge approach provides a specification of the knowledge structures (e.g., incomplete or erroneous knowledge) that are postulated to produce the pattern of errors and correct responses that characterize the performance of particular individuals. The specification is typically embodied as a computer program and constitutes a theory of what these individuals know (including misconceptions). Human performance aiding then focuses on providing missing knowledge and correcting the knowledge bugs. From the point of view of computational power, more knowledge and more correct knowledge can be embodied and delivered in the form of a rule-based expert system, following a knowledge acquisition phase that determines the fine-grained domain knowledge.
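The idea that a bug specification is "embodied as a computer program" can be illustrated with the well-known smaller-from-larger subtraction bug from this literature. This is an illustrative model only, not Brown and Burton's actual diagnostic system.

```python
def buggy_subtract(a, b):
    """'Smaller-from-larger' bug: in each column the smaller digit is
    subtracted from the larger one regardless of position, so borrowing
    is never performed (assumes a >= b >= 0)."""
    da = str(a)
    db = str(b).rjust(len(da), "0")   # align columns, pad with zeros
    digits = [abs(int(x) - int(y)) for x, y in zip(da, db)]
    return int("".join(str(d) for d in digits))
```

The model reproduces the diagnostic signature of such bugs: its answers agree with correct subtraction whenever no borrowing is required (e.g., 75 - 21) and are systematically wrong whenever borrowing is needed (e.g., 52 - 19 yields 47 instead of 33), which is exactly the pattern of errors and correct responses the approach tries to explain.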

But the more critical question for effective human performance may be how knowledge is activated and utilized in the actual problem-solving environment (e.g., Bransford, Sherwood, Vye, and Rieser, 1986; Cheng et al., 1986). The question concerns not merely whether the problem solver knows some particular piece of domain knowledge, such as the relationship between two entities. Does he or she know that it is relevant to the problem at hand, and does he or she know how to utilize this knowledge in problem solving? Studies of education and training often show that students successfully acquire knowledge that is potentially relevant to solving domain problems but that they often fail to exhibit skilled performance; for example, differences in solving mathematical exercises versus word problems (see Gentner and Stevens, 1983, for examples).

The fact that people possess relevant knowledge does not guarantee that that knowledge will be activated and utilized when needed in the actual problem-solving environment. This is the issue of expression of knowledge. Education and training tend to assume that if a person can be shown to possess a piece of knowledge in any circumstance, then this knowledge should be accessible under all conditions in which it might be useful. In contrast, a variety of research has revealed dissociation effects whereby knowledge accessed in one context remains inert in another (Bransford et al., 1986; Cheng et al., 1986; Perkins and Martin, 1986). For example, Gick and Holyoak (1980) found that, unless explicitly prompted, people will fail to apply a recently learned problem-solution strategy to an isomorphic problem (see also Kotovsky, Hayes, and Simon, 1985). Thus the fact that people possess relevant knowledge does not guarantee that this knowledge will be activated when needed. The critical question is not to show that the problem solver possesses domain knowledge, but rather the more stringent criterion that situation-relevant knowledge is accessible under the conditions in which the task is performed. This has been called the problem of inert knowledge: knowledge that is accessed only in a restricted set of contexts.

The general conclusion of studies on the problem of inert knowledge is that possession of the relevant domain knowledge or strategies by themselves is not sufficient to ensure that this knowledge will be accessed in new contexts. Off-line training experiences need to promote an understanding of how concepts and procedures can function as tools for solving relevant problems (Bransford et al., 1986; Brown, Bransford, Ferrara, and Campione, 1983; Brown and Campione, 1986). Training has to be about more than simply student knowledge acquisition; it must also enhance the expression of knowledge by conditionalizing knowledge to its use via information about triggering conditions and constraints (Glaser, 1984).
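The notion of conditionalized knowledge can be sketched abstractly: knowledge is stored together with explicit triggering conditions rather than as bare facts, so that only context-cued knowledge is expressed. The following minimal sketch is illustrative only; the rules and context features are invented examples, not drawn from the paper.

```python
# Minimal illustration of "conditionalized" knowledge: each piece of
# knowledge carries explicit triggering conditions describing when it
# applies. The facts and feature names below are hypothetical examples.

conditionalized_knowledge = [
    {
        "fact": "A base-rate should temper judgments from a small sample.",
        "triggers": {"judgment-under-uncertainty", "sample-evidence"},
    },
    {
        "fact": "Check the contrapositive when testing a conditional rule.",
        "triggers": {"deductive-reasoning", "rule-checking"},
    },
]

def activated_knowledge(context_features):
    """Return only the knowledge whose triggering conditions are cued by
    the current context -- uncued knowledge stays inert (unexpressed)."""
    return [
        k["fact"]
        for k in conditionalized_knowledge
        if k["triggers"] & set(context_features)
    ]

# A context that cues statistical reasoning activates only the first fact;
# knowledge the context does not cue remains inert.
print(activated_knowledge(["sample-evidence", "time-pressure"]))
```

The design point is that retrieval is driven by the match between context features and triggering conditions, not by mere possession of the fact, which is the distinction the inert-knowledge literature draws.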

Similarly, on-line representations of the world can help or hinder problem solvers in recognizing what information or strategies are relevant to the problem at hand (Woods, 1986). For example, Fischhoff, Slovic, and Lichtenstein (1979) and Kruglanski, Friedland, and Farkash (1984) found that judgmental biases (e.g., representativeness) were greatly reduced or eliminated when aspects of the situation cued the relevance of statistical information and reasoning. Thus one dimension along which representations vary is their ability to provide prompts to the knowledge relevant in a given context.

The challenge for cognitive engineering is to study and develop ways to enhance the expression of knowledge and to avoid inert knowledge. What training content and experiences are necessary to develop conditionalized knowledge (Glaser, 1984; Lesh, 1987; Perkins and Martin, 1986)? What representations cue people about the knowledge that is relevant to the current context of goals, system state, and practitioner intentions (Wiecha and Henrion, 1987; Woods and Roth, 1988)?

About Systems

Cognitive engineering is about systems. One source of tremendous confusion has been an inability to clearly define the systems of interest. From one point of view, the computer program being executed is the end application of concern. In this case one often speaks of the interface, the tasks performed within the syntax of the interface, and human users of the interface. Notice that the application world (what the interface is used for) is deemphasized. The bulk of work on human-computer interaction takes this perspective. Issues of concern include designing for learnability (e.g., Brown and Newman, 1985; Carroll and Carrithers, 1984; Kieras and Polson, 1985) and designing for ease and pleasurableness of use (Malone, 1983; Norman, 1983; Shneiderman, 1986).

A second perspective is to distinguish the interface from the application world (Hollnagel, Mancini, and Woods, 1986; Mancini, Woods, and Hollnagel, in press; Miyata and Norman, 1986; Rasmussen, 1986; Stefik et al., 1985). For example, text-editing tasks are performed only in some larger context, such as transcription, data entry, and composition. The interface is an external representation of an application world; that is, a medium through which agents come to know and act on the world: troubleshooting electronic devices (Davis, 1983), logistic maintenance systems, managing data communication networks, managing power distribution networks, medical diagnosis (Cohen et al., 1987; Gadd and Pople, 1987), aircraft and helicopter flight decks (Pew et al., 1986), air traffic control systems, process control accident response (Woods, Roth, and Pople, 1987), and command and control of a battlefield (e.g., Fischhoff et al., 1986). Tasks are properties of the world in question, although performance of these fundamental tasks (i.e., demands) is affected by the design of the external representation (e.g., Mitchell and Saisi, 1987). The human is not a passive user of a computer program but is an active problem-solver in some world. Therefore we will generally refer to people as domain agents or actors or problem solvers, and not as users.

In part, the difference in the foregoing views can be traced to differences in the cognitive complexity of the domain task being supported. Research on person-computer interaction has typically dealt with office applications (e.g., word processors for document preparation or copying machines for duplicating material) in which the goals to be accomplished (e.g., replace word 1 with word 2) and the steps required to accomplish them are relatively straightforward. These applications fall at one extreme of the cognitive complexity space. In contrast, there are many decision-making and supervisory environments (e.g., military situation assessment, medical diagnosis) in which problem formulation, situation assessment, goal definition, plan generation, and plan monitoring and adaptation are significantly more complex. It is in designing interfaces and aids for these applications that it is essential to distinguish the world to be acted on from the interface or window on the world (how one comes to know that world) and from agents who can act directly or indirectly on the world.

The system of interest in design should not be the machine problem solver per se, nor should the focus of interest in evaluation be the performance of the machine problem solver alone. Ultimately the focus must be the design and the performance of the human-machine problem-solving ensemble: how to couple human intelligence and machine power in a single integrated system that maximizes overall performance.

About Multiple Cognitive Agents

A large number of the worlds that cognitive engineering should be able to address contain multiple agents who can act on the world in question (e.g., command and control, process control, data communication networks). Not only do we need to be clear about where systemic boundaries are drawn with respect to the application world and interfaces to or representations of the world, we also need to be clear about the different agents who can act directly or indirectly on the world. Cognitive engineering must be able to address systems with multiple cognitive agents. This applies to multiple human cognitive systems, often called distributed decision making (e.g., Fischhoff, 1986; Fischhoff et al., 1986; March and Weisinger-Baylon, 1986; Rochlin, La Porte, and Roberts, in press; Schum, 1980).

Because of the expansions in computational powers, the machine element can be thought of as a partially autonomous cognitive agent in its own right. This raises the problem of how to build a cognitive system that combines both human and machine cognitive systems, or in other words, joint cognitive systems (Hollnagel, Mancini, and Woods, 1986; Mancini et al., in press). When a system includes these machine agents, the human role is not eliminated but shifted. This means that changes in automation are changes in the joint human-machine cognitive system, and the design goal is to maximize overall performance.

One metaphor that is often invoked to frame questions about the relationship between human and machine intelligence is to examine human-human relationships in multiperson problem-solving or advisory situations and then to transpose the results to human-intelligent machine interaction (e.g., Coombs and Alty, 1984). Following this metaphor leads Muir (1987) to raise the question of the role of trust between man and machine in effective performance. One provocative question that Muir's analysis generates is, how does the level of trust between human and machine problem solvers affect performance? The practitioner's judgment of machine competence or predictability can be miscalibrated, leading to excessive trust or mistrust. Either a system will be underutilized or ignored when it could provide effective assistance, or the practitioner will defer to the machine even in areas that challenge or exceed the machine's range of competence.

Another question concerns how trust is established between human and machine. Trust or mistrust is based on cumulative experience with the other agent that provides evidence about enduring characteristics of the agent, such as competence and predictability. This means that factors about how new technology is introduced into the work environment can play a critical role in building or undermining trust in the machine problem solver. If this stage of technology introduction is mishandled (for example, practitioners are exposed to the system before it is adequately debugged), the practitioner's trust in the machine's competence can be undermined. Muir's analysis shows how variations in explanation and display facilities affect how the person will use the machine by affecting his or her ability to see how the machine works and, therefore, his or her level of calibration. Muir also points out how human information-processing biases can affect how the evidence of experience is interpreted in the calibration process.

A second metaphor that is frequently invoked is supervisory control (Rasmussen, 1986; Sheridan and Hennessy, 1984). Again the machine element is thought of as a semiautonomous cognitive system, but in this case it is a lower-order subordinate, albeit partially autonomous. The human supervisor generally has a wider range of responsibility, and he or she possesses ultimate responsibility and authority. Boy (1987) uses this metaphor to guide the development of assistant systems built from AI technology.

In order for a supervisory control architecture between human and machine agents to function effectively, several requirements must be met that, as Woods (1986) has pointed out, are often overlooked when tool-driven constraints dominate design. First, the supervisor must have real as well as titular authority; machine problem solvers can be designed and introduced in such a way that the human retains the responsibility for outcomes without any effective authority. Second, the supervisor must be able to redirect the lower-order machine cognitive system. Roth et al. (1987) found that some practitioners tried to devise ways to instruct an expert system in situations in which the machine's problem solving had broken down, even when the machine's designer had provided no such mechanisms. Third, in order to be able to supervise another agent, there is need for a common or shared representation of the state of the world and of the state of the problem-solving process (Woods and Roth, 1988b); otherwise communication between the agents will break down (e.g., Suchman, 1987).

Significant attention has been devoted to the issue of how to get intelligent machines to assess the goals and intentions of humans without requiring explicit statements (e.g., Allen and Perrault, 1980; Quinn and Russell, 1986). However, the supervisory control metaphor highlights that it is at least as important to pay attention to what information or knowledge people need to track the intelligent machine's state of mind (Woods and Roth, 1988a).

A third metaphor is to consider the new machine capabilities as extensions and expansions along a dimension of machine power. In this metaphor machines are tools; people are tool builders and tool users. Technological development has moved from physical tools (tools that magnify capacity for physical work) to perceptual tools (extensions to human perceptual apparatus, such as medical imaging) and now, with the arrival of AI technology, to cognitive tools. (Although this type of tool has a much longer history, e.g., aide-memoires or decision analysis, AI has certainly increased the interest in and ability to provide cognitive tools.)

In this metaphor, the question of the relationship between machine and human takes the form of, what kind of tool is an intelligent machine (e.g., Ehn and Kyng, 1984; Suchman, 1987; Woods, 1986)? At one extreme, the machine can be a prosthesis that compensates for a deficiency in human reasoning or problem solving. This could be a local deficiency for the population of expected human practitioners or a global weakness in human reasoning. At the other extreme, the machine can be an instrument in the hands of a fundamentally competent but limited-resource human practitioner (Woods, 1986). The machine aids the practitioner by providing increased or new kinds of resources (either knowledge resources or processing resources, such as an expanded field of attention).

The extra resources may support improved performance in several ways. One path is to off-load overhead information-processing activities from the person to the machine to allow the human practitioner to focus his or her resources on higher-level issues and strategies. Examples include keeping track of multiple ongoing activities in an external memory, performing basic data computations or transformations, and collecting the evidence related to decisions about particular domain issues, as occurred recently with new computer-based displays in nuclear power plant control rooms. Extra resources may help to improve performance in another way by allowing a restructuring of how the human performs the task, shifting performance onto a new, higher plateau (see Pea, 1985). This restructuring concept is in contrast to the usual notion of new systems as amplifiers of user capabilities. As Pea (1985) points out, the amplification metaphor implies that support systems improve human performance by increasing the strength or power of the cognitive processes the human problem solver goes through to solve the problem, but without any change in the underlying activities, processes, or strategies that determine how the problem is solved. Alternatively, the resources provided (or not provided) by new performance aids and interface systems can support restructuring of the activities, processes, or strategies that carry out the cognitive functions relevant to performing domain tasks (e.g., Woods and Roth, 1988). New levels of performance are now possible, and the kinds of errors one is prone to (and therefore the consequences of errors) change as well.

The instrumental perspective suggests that the most effective power provided by good cognitive tools is conceptualization power (Woods and Roth, 1988a). The importance of conceptualization power in effective problem-solving performance is often overlooked because the part of the problem-solving process that it most crucially affects, problem formulation and reformulation, is often left out of studies of problem solving and the design basis of new support systems. Support systems that increase conceptualization power (1) enhance a problem solver's ability to experiment with possible worlds or strategies (e.g., Hollan, Hutchins, and Weitzman, 1984; Pea, 1985; Woods et al., 1987); (2) enhance their ability to visualize or to make concrete the abstract and uninspectable (analogous to perceptual tools), in order to better see the implications of a concept and to help one restructure one's view of the problem (Becker and Cleveland, 1984; Coombs and Hartley, 1987; Hutchins, Hollan, and Norman, 1985; Pople, 1985); and (3) enhance error detection by providing better feedback about the effects/results of actions (Rizzo, Bagnara, and Visciola, 1987).

Problem-Driven

Cognitive engineering is problem-driven, tool-constrained. This means that cognitive engineering must be able to analyze a problem-solving context and understand the sources of both good and poor performance; that is, the cognitive problems to be solved or challenges to be met (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

To build a cognitive description of a problem-solving world, one must understand how representations of the world interact with different cognitive demands imposed by the application world in question and with characteristics of the cognitive agents, both for existing and prospective changes in the world. Building a cognitive description is part of a problem-driven approach to the application of computational power.

The results from this analysis are used to define the kind of solutions that are needed to enhance successful performance: to meet cognitive demands of the world, to help the human function more expertly, to eliminate or mitigate error-prone points in the total cognitive system (demand-resource mismatches). The results of this process then can be deployed in many possible ways, as constrained by tool-building limitations and tool-building possibilities: exploration training worlds, new information representation aids, advisory systems, or machine problem solvers (see Roth and Woods, 1988; Woods and Roth, 1988).

In tool-driven approaches, knowledge acquisition focuses on describing domain knowledge in terms of the syntax of computational mechanisms; that is, the language of implementation is used as a cognitive language. Semantic questions are displaced either to whoever selects the computational mechanisms or to the domain expert who enters knowledge.

The alternative is to provide an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean about problem solving in the domain (Woods, 1988). Acquiring and using a domain semantics is essential to avoiding potential errors and specifying performance boundaries when building intelligent machines (Roth et al., 1987). Techniques for analyzing cognitive demands not only help characterize a particular world but also help to build a repertoire of general cognitive situations that are transportable. There is a clear trend toward this conception of knowledge acquisition in order to achieve more effective decision support and fewer brittle machine problem solvers (e.g., Clancey, 1985; Coombs, 1986; Gruber and Cohen, 1987).

AN EXAMPLE OF HOW COGNITIVE AND COMPUTATIONAL TECHNOLOGIES INTERACT

To illustrate the role of cognitive engineering in the deployment of new computational powers, consider a case in human-computer interaction (for other cases see Mitchell and Forren, 1987; Mitchell and Saisi, 1987; Roth and Woods, 1988; Woods and Roth, 1988). It is one example of how purely technology-driven deployment of new automation capabilities can produce unintended and unforeseen negative consequences. In this case an attempt was made to implement a computerized procedure system using a commercial hypertext system for building and navigating large network data bases. Because cognitive engineering issues were not considered in the application of the new technology, a high-level person-machine performance problem resulted: the getting-lost phenomenon (Woods, 1984). Based on a cognitive analysis of the world's demands, it was possible to redesign the system to support domain actors and eliminate the getting-lost problems (Elm and Woods, 1985). Through cognitive engineering it proved possible to build a more effective computerized procedure system that, for the most part, was within the technological boundaries set by the original technology chosen for the application.

The data base application in question was designed to computerize paper-based instructions for nuclear power plant emergency operation. The system was built based on a network data base shell with a built-in interface for navigating the network (Robertson, McCracken, and Newell, 1981). The shell already treated human-computer interface issues, so it was assumed possible to create the computerized system simply by entering domain knowledge (i.e., the current instructions as implemented for the paper medium) into the interface and network data base framework provided by the shell.

The system contained two kinds of frames: menu frames, which served to point to other frames, and content frames, which contained instructions from the paper procedures (generally one procedure step per frame). In preliminary tests of the system, it was found that people uniformly failed to complete recovery tasks with procedures computerized in this way. They became disoriented or "lost": unable to keep procedure steps in pace with plant behavior, unable to determine where they were in the network of frames, unable to decide where to go next, or unable even to find places where they knew they should be (i.e., they diagnosed the situation and knew the appropriate responses as trained operators, yet could not find the relevant procedural steps in the network). These results were found with people experienced with the paper-based procedures and plant operations, as well as with people knowledgeable in the frame-network software package and how the procedures were implemented within it.
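The two-frame organization described above can be sketched abstractly. The frame names, step texts, and network shape below are hypothetical illustrations of the structure, not details of the actual plant system; the sketch only shows why every move between steps forces a traversal through menu frames.

```python
# Illustrative sketch of a network with the two frame types the text
# describes: menu frames only point to other frames, while content frames
# hold a single procedure step. All names and topology are hypothetical.

class Frame:
    def __init__(self, name, kind, children=(), step_text=None):
        self.name = name
        self.kind = kind            # "menu" or "content"
        self.children = list(children)
        self.step_text = step_text  # one procedure step per content frame

# A tiny hypothetical network: a top menu fans out through a sub-menu
# to one-step content frames.
step1 = Frame("E-1 step 1", "content", step_text="Verify reactor trip")
step2 = Frame("E-1 step 2", "content", step_text="Verify safety injection")
menu_e1 = Frame("E-1 menu", "menu", [step1, step2])
top = Frame("Top menu", "menu", [menu_e1])

def frames_visited(root, target):
    """Depth-first search returning the chain of frames a user must
    traverse from the root to reach a given content frame -- each hop
    corresponds to a full screen change."""
    def dfs(node, path):
        path = path + [node]
        if node is target:
            return path
        for child in node.children:
            found = dfs(child, path)
            if found:
                return found
        return None
    return dfs(root, [])

# Even in this toy network, moving between steps means leaving the
# current content frame and re-traversing menu frames: each content
# frame is the "narrow window on the world" the text describes.
print([f.name for f in frames_visited(top, step2)])
```

Because content frames hold only one step and never link laterally, every menu frame added to deepen the network lengthens the path between adjacent steps, which is one structural reading of the disorientation the tests uncovered.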

What was the source of the disorientation problem? It resulted from a failure to analyze the cognitive demands associated with using procedures in an externally paced world. For example, in using the procedures the operator often is required to interrupt one activity and transition to another step in the procedure, or to a different procedure, depending on plant conditions and plant responses to operator actions. As a result, operators need to be able to rapidly transition across procedure boundaries and to return to incomplete steps. Because of the size of a frame, there was a very high proportion of menu frames relative to content frames, and the content frames provided a narrow window on the world. This structure made it difficult to read ahead to anticipate instructions, to mark steps pending completion and return to them easily, to see the organization of the steps, or to mark a trail of activities carried out during the recovery. Many activities that are inherently easy to perform in a physical book turned out to be very difficult to carry out (for example, reading ahead). The result was a mismatch between user information-processing activities during domain tasks and the structure of the interface as a representation of the world of recovery from abnormalities.

These results triggered a full design cycle that began with a cognitive analysis to determine the user information-handling activities needed to effectively accomplish recovery tasks in emergency situations. Following procedures was not simply a matter of linear, step-by-step execution of instructions; rather, it required the ability to maintain a broad context of the purpose and relationships among the elements in the procedure (see also Brown et al., 1982; Roth et al., 1987). Operators needed to maintain awareness of the global context (i.e., how a given step fits into the overall plan), to anticipate the need for actions by looking ahead, and to monitor for changes in plant state that would require adaptation of the current response plan.

A variety of cognitive engineering techniques were utilized in a new interface design to support the demands (see Woods, 1984). First, a spatial metaphor was used to make the system more like a physical book. Second, display selection/movement options were presented in parallel, rather than sequentially, with procedural information (defining two types of windows in the interface). Transition options were presented at several grains of analysis to support moves from step to step as easily as moves across larger units in the structure of the procedure system. In addition, incomplete steps were automatically tracked, and those steps were made directly accessible (e.g., electronic bookmarks or placeholders).

To provide the global context within which the current procedure step occurs, the step of interest is presented in detail and is embedded in a skeletal structure of the larger response plan of which it is a part (Furnas, 1986; Woods, 1984). Context sensitivity was supported by displaying the rules for possible adaptation or shifts in the current response plan that are relevant to the current context, in parallel with current relevant options and the current region of interest in the procedures (a third window). Note how the cognitive analysis of the domain defined what types of data needed to be seen effectively in parallel, which then determined the number of windows required. Also note that the cognitive engineering redesign was, with a few exceptions, directly implementable within the base capabilities of the interface shell.
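Two of the redesign ideas, automatic tracking of incomplete steps and presenting the current step embedded in a skeletal plan view, can be sketched as follows. This is a hypothetical sketch of the concepts only; the class, its method names, and the step labels are invented, not taken from the actual control-room system.

```python
# Hypothetical sketch of two redesign ideas from the text: incomplete
# steps are tracked automatically (electronic bookmarks), and the current
# step is shown embedded in a skeleton of the larger response plan.
# All names and plan content are illustrative assumptions.

class ProcedureSession:
    def __init__(self, steps):
        self.steps = steps              # ordered list of step labels
        self.pending = []               # automatically tracked bookmarks
        self.current = 0

    def defer_current(self):
        """Interrupt the current step (e.g., plant conditions force a
        transition) and bookmark it for direct return later."""
        self.pending.append(self.current)

    def goto(self, index):
        self.current = index

    def return_to_pending(self):
        """Jump directly back to the most recently deferred step."""
        self.current = self.pending.pop()

    def skeleton_view(self, context=1):
        """Current step in detail, embedded in a skeletal outline of the
        plan (in the spirit of Furnas's fisheye views): nearby and
        bookmarked steps stay visible, distant ones are elided."""
        lines = []
        for i, step in enumerate(self.steps):
            if i == self.current:
                lines.append(f">> {step}  (current, shown in detail)")
            elif abs(i - self.current) <= context or i in self.pending:
                lines.append(f"   {step}")
            else:
                lines.append("   ...")
        return lines

session = ProcedureSession(["Step 1", "Step 2", "Step 3", "Step 4"])
session.goto(1)
session.defer_current()      # interrupted at Step 2
session.goto(3)              # transition forced by plant conditions
session.return_to_pending()  # electronic bookmark brings us back to Step 2
print(session.skeleton_view())
```

The design point mirrors the text: the bookkeeping of interrupted activities is off-loaded to the machine, while the skeletal view preserves the operator's awareness of where the current step sits in the overall response plan.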

As the foregoing example saliently demonstrates, there can be severe penalties for failing to adequately map the cognitive demands of the environment. However, if we understand the cognitive requirements imposed by the domain, then a variety of techniques can be employed to build support systems for those functions.

SUMMARY

The problem of providing effective decision support hinges on how the designer decides what will be useful in a particular application. Can researchers provide designers with concepts and techniques to determine what will be useful support systems, or are we condemned to simply build what can be built practically and wait for the judgment of experience? Is principle-driven design possible?

A vigorous and viable cognitive engineering can provide the knowledge and techniques necessary for principle-driven design. Cognitive engineering does this by providing the basis for a problem-driven, rather than a technology-driven, approach whereby the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver. Cognitive engineering can address (a) existing cognitive systems, in order to identify deficiencies that cognitive system redesign can correct, and (b) prospective cognitive systems, as a design tool during the allocation of cognitive tasks and the development of an effective joint architecture. In this paper we have attempted to outline the questions that need to be answered to make this promise real and to point to research that already has begun to provide the necessary concepts and techniques.

REFERENCESAllen J and Perrault C (1980) Analyzing intention in

utterances ArtificialIntelligence 15143-178Becker R A and Cleveland W S (1984) Brushing the

scatlerplot matrix High-interaction graphical methodsfor analyzing multidimensional data (Tech Report)Murray Hill NJ ATampT Bell Laboratories

Boy G A (1987) Operator assistant systems Interna-tional Journal of Man-Machine Studies 27 541-554Also in G Mancini D Woods and E Hollnagel (Eds)(in press) Cognitive engineering in dynamic worldsLondon Academic Press

Bransford J Sherwood R Vye N and Rieser J (1986)Teaching and problem solving Research foundationsAmerican Psychologist 41 1078-1089

Brown A L Bransford J D Ferrara R A and Cam-pione J C (1983) Learning remembering and under-standing In J H Flavell and E M Markman (Eds)Carmichaels manual of child psychology New YorkWiley

Brown A L and Campione J C (1986) Psychologicaltheory and the study of learning disabilities AmericanPsychologist 411059-1068

Brown J S and Burton R R (1978) Diagnostic modelsfor procedural bugs in basic mathematics CognitiveScience 2 155-192

Brown J 5 Moran T P and Williams M D (1982) Thesemantics of procedures (Tech Report) Palo Alto CAXerox Palo Alto Research Center

Brown J 5 and Newman S E (1985) Issues in cogni-tive and social ergonomics From our house to Bau-haus Human-Computer Interaction 1 359-391

Brown J S and VanLehn K (1980) Repair theory Agenerative theory of bugs in procedural skills Cogni-tive Science 4 379-426

428-August 1988

Carroll J M and Carrithers C (1984) Training wheels ina user interface Communications of the ACM 27800-806

Cheng P W and Holyoak K J (1985) Pragmatic reason-ing schemas Cognitive Psychology 17 391-416

Cheng P Holyoak K Nisbett R and Oliver 1 (1986)Pragmatic versus syntactic approaches to training de-ductive reasoning Cognitive Psychology 18293-328

Clancey W J (1985) Heuristic classification Artificial In-telligence 27 289-350

Cohen P Day D Delisio J Greenberg M Kjeldsen Rand Suthers D (1987) Management of uncertainty inmedicine In Proceedings of the IEEE Conference onComputers and Communications New York IEEE

Coombs M J (1986) Artificial intelligence and cognitivetechnology Foundations and perspectives In E Holl-nagel G Mancini and D D Woods (Eds) Intelligentdecision support in process environments New YorkSpringer- Verlag

Coombs M J and Alty J 1 (1984) Expert systems Analternative paradigm International Journal of Man-Machine Studies 20 21-43

Coombs MJ and Hartley R T (1987) The MGR algo-rithm and its application to the generation of explana-tions for novel events International Journal of Man-Machine Studies 27 679-708 Also in Mancini GWoods D and Hollnagel E (Eds) (in press) Cogni-tive engineering in dynamic worlds London AcademicPress

Davis R (1983) Reasoning from first principles in elec-tronic troubleshooting International Journal of Man-Machine Studies 19403-423

De Keyser V (1986) Les interactions hommes-machineCaracteristiques et utilisations des different supportsdinformation par les operateurs (Person-machine inter-action How operators use different information chan-nels) Rapport Politique ScientifiqueFAST no 8Liege Belgium Psychologie du Travail Universite deIEtat a Liege

Dennett D (1982) Beyond belief In A Woodfield (Ed)Thought and object Oxford Clarendon Press

Dorner D (1983) Heuristics and cognition in complexsystems In R Groner M Groner and W F Bischof(Eds) Methods of heuristics Hillsdale NJ Erlbaum

Ehn P and Kyng M (1984) A tool perspective on design ofinteractive computer support for skilled workers Unpub-lished manuscript Swedish Center for Working LifeStockholm

Elm W C and Woods D D (1985) Getting lost A casestudy in interface design In Proceedings of the HumanFactors Society 29th Annual Meeting (pp 927-931)Santa Monica CA Human Factors Society

Fischhoff B (1986) Decision making in complex systemsIn E Hollnagel G Mancini and D D Woods (Eds)Intelligent decision support New York Springer-Ver-lag

Fischhoff B Slovic P and Lichtenstein S (1979) Im-proving intuitive judgment by subjective sensitivityanalysis Organizational Behavior and Human Perfor-mance 23 339-359

Fischhoff B Lanir Z and Johnson S (1986) Militaryrisk taking and modem C31(Tech Report 86-2) EugeneOR Decision Research

Funder D C (1987) Errors and mistakes Evaluating theaccuracy of social judgments Psychological Bulletin10175-90

Furnas G W (1986) Generalized fisheye views In M

HUMAN FACTORS

Mantei and P Orbeton (Eds) Human factors in com-puting systems CHJ86 Conference Proceedings (pp16-23) New York ACMSIGCHI

Gadd, C. S., and Pople, H. E. (1987). An interpretation synthesis model of medical teaching rounds discourse: Implications for expert system interaction. International Journal of Educational Research, 1.

Gentner, D., and Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Erlbaum.

Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.

Gick, M. L., and Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-365.

Glaser, R. (1984). Education and thinking: The role of knowledge. American Psychologist, 39, 93-104.

Gruber, T., and Cohen, P. (1987). Design for acquisition: Principles of knowledge system design to facilitate knowledge acquisition (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 143-159.

Henderson, A., and Card, S. (1986). Rooms: The use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface (Tech. Report). Palo Alto, CA: Xerox PARC.

Hirschhorn, L. (1984). Beyond mechanization: Work and technology in a postindustrial age. Cambridge, MA: MIT Press.

Hollan, J., Hutchins, E., and Weitzman, L. (1984). Steamer: An interactive inspectable simulation-based training system. AI Magazine, 4, 15-27.

Hollnagel, E., Mancini, G., and Woods, D. D. (Eds.). (1986). Intelligent decision support in process environments. New York: Springer-Verlag.

Hollnagel, E., and Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.

Hoogovens Report. (1976). Human factors evaluation: Hoogovens No. 2 hot strip mill (Tech. Report FR251). London: British Steel Corporation/Hoogovens.

Hutchins, E., Hollan, J., and Norman, D. A. (1985). Direct manipulation interfaces. Human-Computer Interaction, 1, 311-338.

James, W. (1890). The principles of psychology. New York: Holt.

Kieras, D. E., and Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365-394.

Klein, G. A. (in press). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine research (vol. 5). Greenwich, CT: JAI Press.

Kotovsky, K., Hayes, J. R., and Simon, H. A. (1985). Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 17, 248-294.

Kruglanski, A., Friedland, N., and Farkash, E. (1984). Lay persons' sensitivity to statistical information: The case of high perceived applicability. Journal of Personality and Social Psychology, 46, 503-518.

Lesh, R. (1987). The evolution of problem representations in the presence of powerful conceptual amplifiers. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics. Hillsdale, NJ: Erlbaum.

Malone, T. W. (1983). How do people organize their desks? Implications for designing office automation systems. ACM Transactions on Office Information Systems, 1, 99-112.

Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press. (Special issue of International Journal of Man-Machine Studies, vol. 27)

March, J. G., and Weisinger-Baylon, R. (Eds.). (1986). Ambiguity and command. Marshfield, MA: Pitman Publishing.

McKendree, J., and Carroll, J. M. (1986). Advising roles of a computer consultant. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 35-40). New York: ACM/SIGCHI.

Miller, P. L. (1983). ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 449-461.

Mitchell, C., and Forren, M. G. (1987). Multimodal user input to supervisory control systems: Voice-augmented keyboard. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 594-607.

Mitchell, C., and Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.

Miyata, Y., and Norman, D. A. (1986). Psychological issues in support of multiple activities. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.

Montmollin, M. de, and De Keyser, V. (1985). Expert logic vs. operator logic. In G. Johannsen, G. Mancini, and L. Martensson (Eds.), Analysis, design and evaluation of man-machine systems. CEC-JRC, Ispra, Italy: IFAC.

Muir, B. (1987). Trust between humans and machines. International Journal of Man-Machine Studies, 27, 527-539. Also in Mancini, G., Woods, D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Newell, A., and Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209-242.

Noble, D. F. (1984). Forces of production: A social history of industrial automation. New York: Alfred A. Knopf.

Norman, D. A. (1981). Steps towards a cognitive engineering (Tech. Report). San Diego: University of California, San Diego, Program in Cognitive Science.

Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the ACM, 26, 254-258.

Norman, D. A., and Draper, S. W. (1986). User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.

Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.

Perkins, D., and Martin, F. (1986). Fragile knowledge and neglected strategies in novice programmers. In E. Soloway and S. Iyengar (Eds.), Empirical studies of programmers. Norwood, NJ: Ablex.

Pew, R. W., et al. (1986). Cockpit automation technology (Tech. Report 6133). Cambridge, MA: BBN Laboratories, Inc.

Pope, R. H. (1978). Power station control room and desk design: Alarm system and experience in the use of CRT displays. In Proceedings of the International Symposium on Nuclear Power Plant Control and Instrumentation. Cannes, France.

Pople, H., Jr. (1985). Evolution of an expert system: From Internist to Caduceus. In I. De Lotto and M. Stefanelli (Eds.), Artificial intelligence in medicine. New York: Elsevier Science Publishers.

Quinn, L., and Russell, D. M. (1986). Intelligent interfaces: User models and planners. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 314-320). New York: ACM/SIGCHI.

Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.

Rizzo, A., Bagnara, S., and Visciola, M. (1987). Human error detection processes. International Journal of Man-Machine Studies, 27, 555-570. Also in Mancini, G., Woods, D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Robertson, G., McCracken, D., and Newell, A. (1981). The ZOG approach to man-machine communication. International Journal of Man-Machine Studies, 14, 461-488.

Rochlin, G. I., La Porte, T. R., and Roberts, K. H. (in press). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review.

Roth, E. M., Bennett, K., and Woods, D. D. (1987). Human interaction with an intelligent machine. International Journal of Man-Machine Studies, 27, 479-525. Also in Mancini, G., Woods, D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Roth, E. M., and Woods, D. D. (1988). Aiding human performance I: Cognitive analysis. Le Travail Humain, 51(1), 39-64.

Schum, D. A. (1980). Current developments in research on cascaded inference. In T. S. Wallstein (Ed.), Cognitive processes in decision and choice behavior. Hillsdale, NJ: Erlbaum.

Selfridge, O. G., Rissland, E. L., and Arbib, M. A. (1984). Adaptive control of ill-defined systems. New York: Plenum Press.

Sheridan, T., and Hennessy, R. (Eds.). (1984). Research and modeling of supervisory control behavior. Washington, DC: National Academy Press.

Shneiderman, B. (1986). Seven plus or minus two central issues in human-computer interaction. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 343-349). New York: ACM/SIGCHI.

Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S., and Suchman, L. (1985, September). Beyond the chalkboard: Using computers to support collaboration and problem solving in meetings (Tech. Report). Palo Alto, CA: Intelligent Systems Laboratory, Xerox Palo Alto Research Center.

Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.

Wiecha, C., and Henrion, M. (1987). A graphical tool for structuring and understanding quantitative decision models. In Proceedings of Workshop on Visual Languages. New York: IEEE Computer Society.

Wiener, E. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229-244.

Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.

Woods, D. D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L. P. Goodstein, H. B. Andersen, and S. E. Olsen (Eds.), Mental models, tasks and errors: A collection of essays to celebrate Jens Rasmussen's 60th birthday. London: Taylor and Francis.

Woods, D. D., and Hollnagel, E. (1987). Mapping cognitive demands in complex problem-solving worlds (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 257-275.

Woods, D. D., and Roth, E. M. (1986). Models of cognitive behavior in nuclear power plant personnel (NUREG-CR-4532). Washington, DC: U.S. Nuclear Regulatory Commission.

Woods, D. D., and Roth, E. M. (1988a). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interaction. New York: North-Holland.

Woods, D. D., and Roth, E. M. (1988b). Aiding human performance II: From cognitive analysis to support systems. Le Travail Humain, 51, 139-172.

Woods, D. D., Roth, E. M., and Pople, H. (1987). Cognitive Environment Simulation: An artificial intelligence system for human performance assessment (NUREG-CR-4862). Washington, DC: U.S. Nuclear Regulatory Commission.

sen, 1986; Selfridge, Rissland, and Arbib, 1984, for other discussions on the nature of complexity in problem solving).

Ecological

Cognitive engineering is ecological. It is about multidimensional open worlds and not about the artificially bounded closed worlds typical of the laboratory or the engineer's desktop (e.g., Coombs and Hartley, 1987; Funder, 1987). Of course, virtually all of the work environments that we might be interested in are man-made. The point is that these worlds encompass more than the design intent; they exist in the world as natural problem-solving habitats.

An example of the ecological perspective is the need to study humans solving problems with tools (i.e., support systems), as opposed to laboratory research that continues, for the most part, to examine human performance stripped of any tools. How to put effective cognitive tools into the hands of practitioners is the sine qua non for cognitive engineering. From this viewpoint, quite a lot could be learned from examining the nature of the tools that people spontaneously create to work more effectively in some problem-solving environment, or examining how preexisting mechanisms are adapted to serve as tools, as occurred in Roth et al. (1987), or examining how tools provided for a practitioner are actually put to use. The studies by De Keyser (e.g., 1986) are nearly unique with respect to the latter.

In reducing the target world to a tractable laboratory or desktop world in search of precise results, we run the risk of eliminating the critical features of the world that drive behavior. This creates the problem of deciding what counts as an effective stimulus (as Gibson has pointed out in ecological perception) or, to use an alternative terminology, deciding what counts for a symbol. To decide this question, Gibson (1979) and Dennett (1982), among others, have pointed out the need for a semantic and pragmatic analysis of environment-cognitive agent relationships, with respect to the goals/resources of the agent and the demands/constraints in the environment. As a result, one has to pay very close attention to what people actually do in a problem-solving world, given the actual demands that they face (Woods, Roth, and Pople, 1987). Principle-driven design of support systems begins with understanding the difficult aspects of a problem-solving situation (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

About the Semantics of a Domain

A corollary to the foregoing points is that cognitive engineering must address the contents or semantics of a domain (e.g., Coombs, 1986; Coombs and Hartley, 1987). Purely syntactic and exclusively tool-driven approaches to develop support systems are vulnerable to the error of the third kind: solving the wrong problem. The danger is to fall into the psychologist's fallacy of William James (1890), whereby the psychologist's reality is confused with the psychological reality of the human practitioner in his or her problem-solving world. To guard against this danger, the psychologist or cognitive engineer must start with the working assumption that practitioner behavior is reasonable and attempt to understand how this behavior copes with the demands and constraints imposed by the problem-solving world in question. For example, the introduction of computerized alarm systems into power plant control rooms inadvertently so badly undermined the strategies operational personnel used to cope with some problem-solving demands that the systems had to be removed and the previous alarm system restored (Pope, 1978).

The question is not why people failed to accept a useful technology, but rather how the original alarm system supported, and the new system failed to support, operator strategies to cope with the world's problem-solving demands. This is not to say that the strategies developed to cope with the original task demands are always optimal, or even that they always produce acceptable levels of performance, but only that understanding how they function in the initial cognitive environment is a starting point to develop truly effective support systems (e.g., Roth and Woods, 1988).

Semantic approaches, on the other hand, are vulnerable to myopia. If each world is seen as completely unique and must be investigated tabula rasa, then cognitive engineering can be no more than a set of techniques that are used to investigate every world anew. If this were the case, it would impose strong practical constraints on principle-driven development of support systems, restricting it to cases in which the consequences of poor performance are extremely high.

To achieve relevance to specific worlds and generalizability across worlds, the cognitive language must be able to escape the language of particular worlds as well as the language of particular computational mechanisms and identify pragmatic reasoning situations, after Cheng and Holyoak (1985) and Cheng, Holyoak, Nisbett, and Oliver (1986). These reasoning situations are abstract relative to the language of the particular application in question, and therefore transportable across worlds, but they are also pragmatic in the sense that the reasoning involves knowledge of the things being reasoned about. More ambitious are attempts to build a formal cognitive language, for example, that by Coombs and Hartley (1987) through their work on coherence in model-generative reasoning.

About Improved Performance

Cognitive engineering is not merely about the contents of a world; it is about changing behavior/performance in that world. This is both a practical consideration (improving performance or reducing errors justifies the investment from the point of view of the world in question) and a theoretical consideration (the ability to produce practical changes in performance is the criterion for demonstrating an understanding of the factors involved). Basic concepts are confirmed only when they generate treatments (aiding, either on-line or off-line) that make a difference in the target world. Cheng's concepts about human deductive reasoning (Cheng and Holyoak, 1985; Cheng et al., 1986) generated treatments that produced very large performance changes, both absolutely and relative to the history of rather ineffectual alternative treatments to human biases in deductive reasoning.

To achieve the goal of enhanced performance, cognitive engineering must first identify the sources of error that impair the performance of the current problem-solving system. This means that there is a need for cognitive engineering to understand where, how, and why machine, human, and human-plus-machine problem solving breaks down in natural problem-solving habitats.

Buggy knowledge (missing, incomplete, or erroneous knowledge) is one source of error (e.g., Brown and Burton, 1978; Brown and VanLehn, 1980; Gentner and Stevens, 1983). The buggy knowledge approach provides a specification of the knowledge structures (e.g., incomplete or erroneous knowledge) that are postulated to produce the pattern of errors and correct responses that characterize the performance of particular individuals. The specification is typically embodied as a computer program and constitutes a theory of what these individuals know (including misconceptions). Human performance aiding then focuses on providing missing knowledge and correcting the knowledge bugs. From the point of view of computational power, more knowledge and more correct knowledge can be embodied and delivered in the form of a rule-based expert system, following a knowledge acquisition phase that determines the fine-grained domain knowledge.
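The buggy-knowledge idea can be caricatured in a few lines of code: a "correct" rule base and a "buggy" variant that is missing one rule reproduce, on the same cases, the characteristic error pattern that the missing rule predicts. This is only an illustrative sketch; the troubleshooting domain, rule contents, and names are invented here, not drawn from the paper or from any cited system.

```python
# Illustrative sketch of the "buggy knowledge" idea: a correct rule base
# and a "buggy" variant missing one rule. Running both over the same
# cases exposes the error pattern that the missing rule predicts.
# The domain and rule names are invented for illustration.

def diagnose(symptoms, rules):
    """Return the fault named by the first rule whose conditions all hold."""
    for conditions, fault in rules:
        if conditions <= symptoms:  # set inclusion: all conditions present
            return fault
    return "no-fault-found"

CORRECT_RULES = [
    ({"no_power", "fuse_blown"}, "replace fuse"),  # specific rule first
    ({"no_power"}, "check supply"),
    ({"overheat"}, "check fan"),
]

# Buggy model: the learner lacks the specific fuse rule, so every
# no-power case collapses into the more general response.
BUGGY_RULES = [r for r in CORRECT_RULES if r[1] != "replace fuse"]

for case in [{"no_power", "fuse_blown"}, {"no_power"}, {"overheat"}]:
    print(sorted(case), "->",
          diagnose(case, CORRECT_RULES), "|", diagnose(case, BUGGY_RULES))
```

A "theory" of the individual, in this sense, is whichever rule set best reproduces his or her observed pattern of errors and correct responses.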

But the more critical question for effective human performance may be how knowledge is activated and utilized in the actual problem-solving environment (e.g., Bransford, Sherwood, Vye, and Rieser, 1986; Cheng et al., 1986). The question concerns not merely whether the problem solver knows some particular piece of domain knowledge, such as the relationship between two entities; does he or she know that it is relevant to the problem at hand, and does he or she know how to utilize this knowledge in problem solving? Studies of education and training often show that students successfully acquire knowledge that is potentially relevant to solving domain problems but that they often fail to exhibit skilled performance, for example, differences in solving mathematical exercises versus word problems (see Gentner and Stevens, 1983, for examples).

The fact that people possess relevant knowledge does not guarantee that that knowledge will be activated and utilized when needed in the actual problem-solving environment. This is the issue of expression of knowledge. Education and training tend to assume that if a person can be shown to possess a piece of knowledge in any circumstance, then this knowledge should be accessible under all conditions in which it might be useful. In contrast, a variety of research has revealed dissociation effects whereby knowledge accessed in one context remains inert in another (Bransford et al., 1986; Cheng et al., 1986; Perkins and Martin, 1986). For example, Gick and Holyoak (1980) found that unless explicitly prompted, people will fail to apply a recently learned problem-solution strategy to an isomorphic problem (see also Kotovsky, Hayes, and Simon, 1985). The critical question is not whether the problem solver possesses domain knowledge, but the more stringent criterion of whether situation-relevant knowledge is accessible under the conditions in which the task is performed. This has been called the problem of inert knowledge: knowledge that is accessed only in a restricted set of contexts.

The general conclusion of studies on the problem of inert knowledge is that possession of the relevant domain knowledge or strategies is not, by itself, sufficient to ensure that this knowledge will be accessed in new contexts. Off-line training experiences need to promote an understanding of how concepts and procedures can function as tools for solving relevant problems (Bransford et al., 1986; Brown, Bransford, Ferrara, and Campione, 1983; Brown and Campione, 1986). Training has to be about more than simply student knowledge acquisition; it must also enhance the expression of knowledge by conditionalizing knowledge to its use via information about triggering conditions and constraints (Glaser, 1984).
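The contrast between inert and conditionalized knowledge can be made concrete with a toy model: two stores contain the same fact, but only the conditionalized store indexes it by triggering conditions, so only that store surfaces the fact in a novel situation. The facts, cue names, and retrieval scheme below are invented for illustration and are not a model from the cited literature.

```python
# Toy contrast between "inert" and "conditionalized" knowledge.
# Both stores hold the same fact; only the conditionalized store
# attaches triggering conditions, so only it retrieves the fact
# in a new situation. Facts and cue names are invented.

# Inert store: knowledge indexed only by the context in which it was learned.
inert_store = {"physics-lecture": "launch at 45 degrees to maximize range"}

# Conditionalized store: the same fact, indexed by features of the
# situations in which it applies (its triggering conditions).
conditionalized_store = [
    ({"projectile", "maximize-range"},
     "launch at 45 degrees to maximize range"),
]

def recall_inert(context):
    # Fires only when the learning context itself recurs.
    return inert_store.get(context)

def recall_conditionalized(situation_features):
    # Fires whenever the triggering conditions are satisfied.
    for triggers, fact in conditionalized_store:
        if triggers <= situation_features:
            return fact
    return None

# A novel situation that shares features with, but is not, the learning context:
situation = {"projectile", "maximize-range", "water-balloon"}
print(recall_inert("water-balloon-contest"))  # None: the knowledge stays inert
print(recall_conditionalized(situation))      # the fact is activated
```

The point of the caricature is Glaser's: what training adds is not the fact itself but the index, the information about when the fact applies.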

Similarly, on-line representations of the world can help or hinder problem solvers in recognizing what information or strategies are relevant to the problem at hand (Woods, 1986). For example, Fischhoff, Slovic, and Lichtenstein (1979) and Kruglanski, Friedland, and Farkash (1984) found that judgmental biases (e.g., representativeness) were greatly reduced or eliminated when aspects of the situation cued the relevance of statistical information and reasoning. Thus, one dimension along which representations vary is their ability to provide prompts to the knowledge relevant in a given context.

The challenge for cognitive engineering is to study and develop ways to enhance the expression of knowledge and to avoid inert knowledge. What training content and experiences are necessary to develop conditionalized knowledge (Glaser, 1984; Lesh, 1987; Perkins and Martin, 1986)? What representations cue people about the knowledge that is relevant to the current context of goals, system state, and practitioner intentions (Wiecha and Henrion, 1987; Woods and Roth, 1988)?

About Systems

Cognitive engineering is about systems. One source of tremendous confusion has been an inability to clearly define the systems of interest. From one point of view, the computer program being executed is the end application of concern. In this case one often speaks of the interface, the tasks performed within the syntax of the interface, and human users of the interface. Notice that the application world (what the interface is used for) is deemphasized. The bulk of work on human-computer interaction takes this perspective. Issues of concern include designing for learnability (e.g., Brown and Newman, 1985; Carroll and Carrithers, 1984; Kieras and Polson, 1985) and designing for ease and pleasurableness of use (Malone, 1983; Norman, 1983; Shneiderman, 1986).

A second perspective is to distinguish the interface from the application world (Hollnagel, Mancini, and Woods, 1986; Mancini, Woods, and Hollnagel, in press; Miyata and Norman, 1986; Rasmussen, 1986; Stefik et al., 1985). For example, text-editing tasks are performed only in some larger context, such as transcription, data entry, and composition. The interface is an external representation of an application world; that is, a medium through which agents come to know and act on the world: troubleshooting electronic devices (Davis, 1983), logistic maintenance systems, managing data communication networks, managing power distribution networks, medical diagnosis (Cohen et al., 1987; Gadd and Pople, 1987), aircraft and helicopter flight decks (Pew et al., 1986), air traffic control systems, process control, accident response (Woods, Roth, and Pople, 1987), and command and control of a battlefield (e.g., Fischhoff et al., 1986). Tasks are properties of the world in question, although performance of these fundamental tasks (i.e., demands) is affected by the design of the external representation (e.g., Mitchell and Saisi, 1987). The human is not a passive user of a computer program but is an active problem solver in some world. Therefore we will generally refer to people as domain agents, or actors, or problem solvers, and not as users.

In part, the difference in the foregoing views can be traced to differences in the cognitive complexity of the domain task being supported. Research on person-computer interaction has typically dealt with office applications (e.g., word processors for document preparation or copying machines for duplicating material) in which the goals to be accomplished (e.g., replace word 1 with word 2) and the steps required to accomplish them are relatively straightforward. These applications fall at one extreme of the cognitive complexity space. In contrast, there are many decision-making and supervisory environments (e.g., military situation assessment, medical diagnosis) in which problem formulation, situation assessment, goal definition, plan generation, and plan monitoring and adaptation are significantly more complex. It is in designing interfaces and aids for these applications that it is essential to distinguish the world to be acted on from the interface or window on the world (how one comes to know that world) and from agents who can act directly or indirectly on the world.

The system of interest in design should not be the machine problem solver per se, nor should the focus of interest in evaluation be the performance of the machine problem solver alone. Ultimately, the focus must be the design and the performance of the human-machine problem-solving ensemble: how to couple human intelligence and machine power in a single integrated system that maximizes overall performance.

About Multiple Cognitive Agents

A large number of the worlds that cognitive engineering should be able to address contain multiple agents who can act on the world in question (e.g., command and control, process control, data communication networks). Not only do we need to be clear about where systemic boundaries are drawn with respect to the application world and interfaces to, or representations of, the world; we also need to be clear about the different agents who can act directly or indirectly on the world. Cognitive engineering must be able to address systems with multiple cognitive agents. This applies to multiple human cognitive systems, often called distributed decision making (e.g., Fischhoff, 1986; Fischhoff et al., 1986; March and Weisinger-Baylon, 1986; Rochlin, La Porte, and Roberts, in press; Schum, 1980).

Because of the expansions in computational powers, the machine element can be thought of as a partially autonomous cognitive agent in its own right. This raises the problem of how to build a cognitive system that combines both human and machine cognitive systems, in other words, joint cognitive systems (Hollnagel, Mancini, and Woods, 1986; Mancini et al., in press). When a system includes these machine agents, the human role is not eliminated but shifted. This means that changes in automation are changes in the joint human-machine cognitive system, and the design goal is to maximize overall performance.

One metaphor that is often invoked to frame questions about the relationship between human and machine intelligence is to examine human-human relationships in multiperson problem-solving or advisory situations and then to transpose the results to human-intelligent machine interaction (e.g., Coombs and Alty, 1984). Following this metaphor leads Muir (1987) to raise the question of the role of trust between man and machine in effective performance. One provocative question that Muir's analysis generates is, how does the level of trust between human and machine problem solvers affect performance? The practitioner's judgment of machine competence or predictability can be miscalibrated, leading to excessive trust or mistrust. Either a system will be underutilized or ignored when it could provide effective assistance, or the practitioner will defer to the machine even in areas that challenge or exceed the machine's range of competence.

Another question concerns how trust is established between human and machine. Trust or mistrust is based on cumulative experience with the other agent that provides evidence about enduring characteristics of the agent, such as competence and predictability. This means that factors about how new technology is introduced into the work environment can play a critical role in building or undermining trust in the machine problem solver. If this stage of technology introduction is mishandled (for example, practitioners are exposed to the system before it is adequately debugged), the practitioner's trust in the machine's competence can be undermined. Muir's analysis shows how variations in explanation and display facilities affect how the person will use the machine, by affecting his or her ability to see how the machine works and therefore his or her level of calibration. Muir also points out how human information-processing biases can affect how the evidence of experience is interpreted in the calibration process.

A second metaphor that is frequently invoked is supervisory control (Rasmussen, 1986; Sheridan and Hennessy, 1984). Again, the machine element is thought of as a semiautonomous cognitive system, but in this case it is a lower-order subordinate, albeit partially autonomous. The human supervisor generally has a wider range of responsibility, and he or she possesses ultimate responsibility and authority. Boy (1987) uses this metaphor to guide the development of assistant systems built from AI technology.

In order for a supervisory control architecture between human and machine agents to function effectively, several requirements must be met that, as Woods (1986) has pointed out, are often overlooked when tool-driven constraints dominate design. First, the supervisor must have real as well as titular authority; machine problem solvers can be designed and introduced in such a way that the human retains the responsibility for outcomes without any effective authority. Second, the supervisor must be able to redirect the lower-order machine cognitive system. Roth et al. (1987) found that some practitioners tried to devise ways to instruct an expert system in situations in which the machine's problem solving had broken down, even when the machine's designer had provided no such mechanisms. Third, in order to be able to supervise another agent, there is need for a common or shared representation of the state of the world and of the state of the problem-solving process (Woods and Roth, 1988b); otherwise, communication between the agents will break down (e.g., Suchman, 1987).
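The three requirements (real authority, the ability to redirect the machine agent, and a shared representation of the problem state) can be sketched as a minimal object model. This is an illustrative sketch only: the classes, method names, and toy goals are invented here, not an implementation from the supervisory-control literature.

```python
# Minimal sketch of a supervisory-control architecture: a semiautonomous
# machine agent works on a shared problem representation, and the human
# supervisor holds real authority to redirect it. All names are invented.

class SharedState:
    """Common representation of the world and of the problem-solving
    process, visible to both agents (requirement 3)."""
    def __init__(self):
        self.world = {"alarm": "high-pressure"}
        self.current_goal = None
        self.log = []  # shared trace of who pursued or set which goal

class MachineAgent:
    """Lower-order, partially autonomous problem solver."""
    def __init__(self, state):
        self.state = state
    def pursue(self):
        # Adopts a default goal only if none is set; records its activity
        # in the shared state so the supervisor can track it.
        self.state.current_goal = self.state.current_goal or "diagnose"
        self.state.log.append(("machine", self.state.current_goal))
        return self.state.current_goal

class Supervisor:
    """Human role: wider responsibility and final authority."""
    def __init__(self, state):
        self.state = state
    def redirect(self, new_goal):
        # Requirement 2: the supervisor can redirect the subordinate agent.
        self.state.current_goal = new_goal
        self.state.log.append(("human", new_goal))

state = SharedState()
machine = MachineAgent(state)
human = Supervisor(state)

machine.pursue()             # machine starts on its own default goal
human.redirect("stabilize")  # human overrides when the machine is off track
machine.pursue()             # machine now works on the redirected goal
print(state.current_goal, state.log)
```

The design point carried by the sketch is that redirection only works because both agents read and write the same `SharedState`; remove that common representation and the supervisor can neither track nor steer the machine's line of reasoning.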

Significant attention has been devoted to the issue of how to get intelligent machines to assess the goals and intentions of humans without requiring explicit statements (e.g., Allen and Perrault, 1980; Quinn and Russell, 1986). However, the supervisory control metaphor highlights that it is at least as important to pay attention to what information or knowledge people need to track the intelligent machine's state of mind (Woods and Roth, 1988a).

A third metaphor is to consider the new machine capabilities as extensions and expansions along a dimension of machine power. In this metaphor, machines are tools; people are tool builders and tool users. Technological development has moved from physical tools (tools that magnify capacity for physical work) to perceptual tools (extensions to the human perceptual apparatus, such as medical imaging) and now, with the arrival of AI technology, to cognitive tools. (Although this type of tool has a much longer history, e.g., aide-memoires or decision analysis, AI has certainly increased the interest in and ability to provide cognitive tools.)

In this metaphor, the question of the relationship between machine and human takes the form of, what kind of tool is an intelligent machine (e.g., Ehn and Kyng, 1984; Suchman, 1987; Woods, 1986)? At one extreme, the machine can be a prosthesis that compensates for a deficiency in human reasoning or problem solving. This could be a local deficiency for the population of expected human practitioners or a global weakness in human reasoning. At the other extreme, the machine can be an instrument in the hands of a fundamentally competent but limited-resource human practitioner (Woods, 1986). The machine aids the practitioner by providing increased or new kinds of resources (either knowledge resources or processing resources, such as an expanded field of attention).

The extra resources may support improved performance in several ways. One path is to off-load overhead information-processing activities from the person to the machine, to allow the human practitioner to focus his or her resources on higher-level issues and strategies. Examples include keeping track of multiple ongoing activities in an external memory, performing basic data computations or transformations, and collecting the evidence related to decisions about particular domain issues, as occurred recently with new computer-based displays in nuclear power plant control rooms. Extra resources may help to improve performance in another way, by allowing a restructuring of how the human performs the task, shifting performance onto a new, higher plateau (see Pea, 1985). This restructuring concept is in contrast to the usual notion of new systems as amplifiers of user capabilities. As Pea (1985) points out, the amplification metaphor implies that support systems improve human performance by increasing the strength or power of the cognitive processes the human problem solver goes through to solve the problem, but without any change in the underlying activities, processes, or strategies that determine how the problem is solved. Alternatively, the resources provided (or not provided) by new performance aids and interface systems can support restructuring of the activities, processes, or strategies that carry out the cognitive functions relevant to performing domain tasks (e.g., Woods and Roth, 1988). New levels of performance are now possible, and the kinds of errors one is prone to (and therefore the consequences of errors) change as well.


The instrumental perspective suggests that the most effective power provided by good cognitive tools is conceptualization power (Woods and Roth, 1988a). The importance of conceptualization power in effective problem-solving performance is often overlooked because the part of the problem-solving process that it most crucially affects, problem formulation and reformulation, is often left out of studies of problem solving and the design basis of new support systems. Support systems that increase conceptualization power (1) enhance a problem solver's ability to experiment with possible worlds or strategies (e.g., Hollan, Hutchins, and Weitzman, 1984; Pea, 1985; Woods et al., 1987); (2) enhance their ability to visualize or to make concrete the abstract and uninspectable (analogous to perceptual tools) in order to better see the implications of concepts and to help one restructure one's view of the problem (Becker and Cleveland, 1984; Coombs and Hartley, 1987; Hutchins, Hollan, and Norman, 1985; Pople, 1985); and (3) enhance error detection by providing better feedback about the effects/results of actions (Rizzo, Bagnara, and Visciola, 1987).

Problem-Driven

Cognitive engineering is problem-driven, tool-constrained. This means that cognitive engineering must be able to analyze a problem-solving context and understand the sources of both good and poor performance, that is, the cognitive problems to be solved or challenges to be met (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

To build a cognitive description of a problem-solving world, one must understand how representations of the world interact with different cognitive demands imposed by the application world in question and with characteristics of the cognitive agents, both for existing and prospective changes in the world. Building a cognitive description is part of a problem-driven approach to the application of computational power.

The results from this analysis are used to define the kind of solutions that are needed to enhance successful performance: to meet cognitive demands of the world, to help the human function more expertly, to eliminate or mitigate error-prone points in the total cognitive system (demand-resource mismatches). The results of this process then can be deployed in many possible ways, as constrained by tool-building limitations and tool-building possibilities: exploration training worlds, new information representation aids, advisory systems, or machine problem solvers (see Roth and Woods, 1988; Woods and Roth, 1988).

In tool-driven approaches, knowledge acquisition focuses on describing domain knowledge in terms of the syntax of computational mechanisms; that is, the language of implementation is used as a cognitive language. Semantic questions are displaced either to whomever selects the computational mechanisms or to the domain expert who enters knowledge.

The alternative is to provide an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean about problem solving in the domain (Woods, 1988). Acquiring and using a domain semantics is essential to avoiding potential errors and specifying performance boundaries when building intelligent machines (Roth et al., 1987). Techniques for analyzing cognitive demands not only help characterize a particular world but also help to build a repertoire of general cognitive situations that are transportable. There is a clear trend toward this conception of knowledge acquisition in order to achieve more effective decision support and fewer brittle machine problem solvers (e.g., Clancey, 1985; Coombs, 1986; Gruber and Cohen, 1987).

AN EXAMPLE OF HOW COGNITIVE AND COMPUTATIONAL TECHNOLOGIES INTERACT

To illustrate the role of cognitive engineering in the deployment of new computational powers, consider a case in human-computer interaction (for other cases see Mitchell and Forren, 1987; Mitchell and Saisi, 1987; Roth and Woods, 1988; Woods and Roth, 1988). It is one example of how purely technology-driven deployment of new automation capabilities can produce unintended and unforeseen negative consequences. In this case an attempt was made to implement a computerized procedure system using a commercial hypertext system for building and navigating large network data bases. Because cognitive engineering issues were not considered in the application of the new technology, a high-level person-machine performance problem resulted: the getting-lost phenomenon (Woods, 1984). Based on a cognitive analysis of the world's demands, it was possible to redesign the system to support domain actors and eliminate the getting-lost problems (Elm and Woods, 1985). Through cognitive engineering it proved possible to build a more effective computerized procedure system that, for the most part, was within the technological boundaries set by the original technology chosen for the application.

The data base application in question was designed to computerize paper-based instructions for nuclear power plant emergency operation. The system was built based on a network data base shell with a built-in interface for navigating the network (Robertson, McCracken, and Newell, 1981). The shell already treated human-computer interface issues, so it was assumed possible to create the computerized system simply by entering domain knowledge (i.e., the current instructions as implemented for the paper medium) into the interface and network data base framework provided by the shell.

The system contained two kinds of frames: menu frames, which served to point to other frames, and content frames, which contained instructions from the paper procedures (generally one procedure step per frame). In preliminary tests of the system it was found that people uniformly failed to complete recovery tasks with procedures computerized in this way. They became disoriented or lost: unable to keep procedure steps in pace with plant behavior, unable to determine where they were in the network of frames, unable to decide where to go next, or unable even to find places where they knew they should be (i.e., they diagnosed the situation and knew the appropriate responses as trained operators, yet could not find the relevant procedural steps in the network). These results were found with people experienced with the paper-based procedures and plant operations, as well as with people knowledgeable in the frame-network software package and how the procedures were implemented within it.
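The two-frame organization can be sketched as a small data structure. The following Python fragment is a hypothetical illustration, not the original ZOG-based implementation: the frame names and procedure steps are invented. It shows how every content step sits behind layers of menu frames, with only one frame visible at a time.

```python
# Hypothetical sketch of a frame network with "menu" frames that only
# point to other frames and "content" frames holding one procedure step
# each. Frame names and steps are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Frame:
    name: str
    kind: str                                   # "menu" or "content"
    text: str = ""                              # one procedure step
    links: list = field(default_factory=list)   # frames reachable from here

# A tiny emergency-procedure fragment: content frames are reached only
# by descending through menu frames, one frame on screen at a time.
step1 = Frame("E-1 step 1", "content", "Verify reactor trip")
step2 = Frame("E-1 step 2", "content", "Verify turbine trip")
menu_e1 = Frame("E-1 menu", "menu", links=[step1, step2])
top = Frame("Top menu", "menu", links=[menu_e1])

def frames(root, seen=None):
    # Collect every frame reachable from the root, depth-first.
    seen = seen if seen is not None else []
    if root not in seen:
        seen.append(root)
        for f in root.links:
            frames(f, seen)
    return seen

all_frames = frames(top)
menus = [f for f in all_frames if f.kind == "menu"]
print(len(menus), "menu frames for", len(all_frames) - len(menus), "content frames")
```

In the real system the proportion of menu frames to content frames was very high, so most of what an operator saw while navigating carried no procedural content at all.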

What was the source of the disorientation problem? It resulted from a failure to analyze the cognitive demands associated with using procedures in an externally paced world. For example, in using the procedures the operator often is required to interrupt one activity and transition to another step in the procedure, or to a different procedure, depending on plant conditions and plant responses to operator actions. As a result, operators need to be able to rapidly transition across procedure boundaries and to return to incomplete steps. Because of the size of a frame, there was a very high proportion of menu frames relative to content frames, and the content frames provided a narrow window on the world. This structure made it difficult to read ahead to anticipate instructions, to mark steps pending completion and return to them easily, to see the organization of the steps, or to mark a trail of activities carried out during the recovery. Many activities that are inherently easy to perform in a physical book turned out to be very difficult to carry out, for example, reading ahead. The result was a mismatch between user information-processing activities during domain tasks and the structure of the interface as a representation of the world of recovery from abnormalities.

These results triggered a full design cycle that began with a cognitive analysis to determine the user information-handling activities needed to effectively accomplish recovery tasks in emergency situations. Following procedures was not simply a matter of linear, step-by-step execution of instructions; rather, it required the ability to maintain a broad context of the purpose and relationships among the elements in the procedure (see also Brown et al., 1982; Roth et al., 1987). Operators needed to maintain awareness of the global context (i.e., how a given step fits into the overall plan), to anticipate the need for actions by looking ahead, and to monitor for changes in plant state that would require adaptation of the current response plan.

A variety of cognitive engineering techniques were utilized in a new interface design to support the demands (see Woods, 1984). First, a spatial metaphor was used to make the system more like a physical book. Second, display selection/movement options were presented in parallel, rather than sequentially, with procedural information (defining two types of windows in the interface). Transition options were presented at several grains of analysis to support moves from step to step as easily as moves across larger units in the structure of the procedure system. In addition, incomplete steps were automatically tracked, and those steps were made directly accessible (e.g., electronic bookmarks or placeholders).
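The automatic tracking of incomplete steps can be sketched in a few lines. This is a hedged, hypothetical illustration: the `ProcedureSession` class and step names are invented, not taken from the actual system. It shows the core idea of suspended steps being tracked for the operator and kept directly accessible as electronic bookmarks.

```python
# Hypothetical sketch of "electronic bookmarks": steps left pending are
# tracked automatically so the operator can interrupt one activity and
# return to it directly. All names here are invented for illustration.

class ProcedureSession:
    def __init__(self, steps):
        self.steps = list(steps)
        self.pending = []            # incomplete steps, tracked automatically

    def start(self, step):
        if step not in self.pending:
            self.pending.append(step)

    def complete(self, step):
        self.pending.remove(step)

    def bookmarks(self):
        # Directly accessible return points, in the order suspended.
        return list(self.pending)

session = ProcedureSession(["verify trip", "check SI flow", "control pressure"])
session.start("verify trip")
session.start("check SI flow")       # operator interrupted out of the first step
session.complete("check SI flow")
print(session.bookmarks())           # the step still awaiting completion
```

The design point is that the burden of remembering incomplete activities moves from the operator's memory to the machine, which the paper-book medium supports only through physical placeholders.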

To provide the global context within which the current procedure step occurs, the step of interest is presented in detail and is embedded in a skeletal structure of the larger response plan of which it is a part (Furnas, 1986; Woods, 1984). Context sensitivity was supported by displaying the rules for possible adaptation or shifts in the current response plan that are relevant to the current context, in parallel with current relevant options and the current region of interest in the procedures (a third window). Note how the cognitive analysis of the domain defined what types of data needed to be seen effectively in parallel, which then determined the number of windows required. Also note that the cognitive engineering redesign was, with a few exceptions, directly implementable within the base capabilities of the interface shell.
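The detail-in-skeleton presentation follows Furnas's generalized fisheye idea: each node in a structure gets a degree of interest, DOI(x | focus) = API(x) - D(x, focus), where API is a priori importance (here, higher for nodes nearer the root of the response plan) and D is distance from the current focus; only nodes above a threshold are displayed. The sketch below is a minimal, hypothetical Python rendering of that formula over a toy response plan; the node names and threshold are invented.

```python
# Hedged sketch of a generalized fisheye view (after Furnas, 1986):
# show the focal step in detail, embedded in a skeleton of the larger
# response plan. The toy tree and threshold below are invented.

import itertools

tree = {                       # parent -> children, a toy response plan
    "plan": ["phase A", "phase B"],
    "phase A": ["step A1", "step A2"],
    "phase B": ["step B1"],
}

def depth(node, root="plan", d=0):
    if node == root:
        return d
    for parent, kids in tree.items():
        if node in kids:
            return depth(parent, root, d) + 1
    raise KeyError(node)

def distance(a, b):
    # Tree distance via comparison of paths to the root.
    def path(n):
        p = [n]
        while p[-1] != "plan":
            p.append(next(par for par, kids in tree.items() if p[-1] in kids))
        return p
    pa, pb = path(a), path(b)
    common = len(set(pa) & set(pb))
    return (len(pa) - common) + (len(pb) - common)

def doi(node, focus):
    api = -depth(node)         # steps nearer the plan root matter more a priori
    return api - distance(node, focus)

nodes = ["plan"] + list(itertools.chain.from_iterable(tree.values()))
shown = [n for n in nodes if doi(n, "step A1") >= -3]
print(shown)                   # the focal step plus its skeletal context
```

With the focus on "step A1", the nodes that survive the threshold are exactly the focal step and its ancestors, which is the "step in detail, plan in skeleton" display described above.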

As the foregoing example saliently demonstrates, there can be severe penalties for failing to adequately map the cognitive demands of the environment. However, if we understand the cognitive requirements imposed by the domain, then a variety of techniques can be employed to build support systems for those functions.

SUMMARY

The problem of providing effective decision support hinges on how the designer decides what will be useful in a particular application. Can researchers provide designers with concepts and techniques to determine what will be useful support systems, or are we condemned to simply build what can be built practically and wait for the judgment of experience? Is principle-driven design possible?

A vigorous and viable cognitive engineering can provide the knowledge and techniques necessary for principle-driven design. Cognitive engineering does this by providing the basis for a problem-driven, rather than a technology-driven, approach whereby the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver. Cognitive engineering can address (a) existing cognitive systems, in order to identify deficiencies that cognitive system redesign can correct, and (b) prospective cognitive systems, as a design tool during the allocation of cognitive tasks and the development of an effective joint architecture. In this paper we have attempted to outline the questions that need to be answered to make this promise real and to point to research that already has begun to provide the necessary concepts and techniques.

REFERENCES

Allen, J., and Perrault, C. (1980). Analyzing intention in utterances. Artificial Intelligence, 15, 143-178.

Becker, R. A., and Cleveland, W. S. (1984). Brushing the scatterplot matrix: High-interaction graphical methods for analyzing multidimensional data (Tech. Report). Murray Hill, NJ: AT&T Bell Laboratories.

Boy, G. A. (1987). Operator assistant systems. International Journal of Man-Machine Studies, 27, 541-554. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Bransford, J., Sherwood, R., Vye, N., and Rieser, J. (1986). Teaching and problem solving: Research foundations. American Psychologist, 41, 1078-1089.

Brown, A. L., Bransford, J. D., Ferrara, R. A., and Campione, J. C. (1983). Learning, remembering, and understanding. In J. H. Flavell and E. M. Markman (Eds.), Carmichael's manual of child psychology. New York: Wiley.

Brown, A. L., and Campione, J. C. (1986). Psychological theory and the study of learning disabilities. American Psychologist, 41, 1059-1068.

Brown, J. S., and Burton, R. R. (1978). Diagnostic models for procedural bugs in basic mathematics. Cognitive Science, 2, 155-192.

Brown, J. S., Moran, T. P., and Williams, M. D. (1982). The semantics of procedures (Tech. Report). Palo Alto, CA: Xerox Palo Alto Research Center.

Brown, J. S., and Newman, S. E. (1985). Issues in cognitive and social ergonomics: From our house to Bauhaus. Human-Computer Interaction, 1, 359-391.

Brown, J. S., and VanLehn, K. (1980). Repair theory: A generative theory of bugs in procedural skills. Cognitive Science, 4, 379-426.

Carroll, J. M., and Carrithers, C. (1984). Training wheels in a user interface. Communications of the ACM, 27, 800-806.

Cheng, P. W., and Holyoak, K. J. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.

Cheng, P., Holyoak, K., Nisbett, R., and Oliver, L. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 18, 293-328.

Clancey, W. J. (1985). Heuristic classification. Artificial Intelligence, 27, 289-350.

Cohen, P., Day, D., Delisio, J., Greenberg, M., Kjeldsen, R., and Suthers, D. (1987). Management of uncertainty in medicine. In Proceedings of the IEEE Conference on Computers and Communications. New York: IEEE.

Coombs, M. J. (1986). Artificial intelligence and cognitive technology: Foundations and perspectives. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.

Coombs, M. J., and Alty, J. L. (1984). Expert systems: An alternative paradigm. International Journal of Man-Machine Studies, 20, 21-43.

Coombs, M. J., and Hartley, R. T. (1987). The MGR algorithm and its application to the generation of explanations for novel events. International Journal of Man-Machine Studies, 27, 679-708. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Davis, R. (1983). Reasoning from first principles in electronic troubleshooting. International Journal of Man-Machine Studies, 19, 403-423.

De Keyser, V. (1986). Les interactions hommes-machine: Caractéristiques et utilisations des différents supports d'information par les opérateurs (Person-machine interaction: How operators use different information channels). Rapport Politique Scientifique/FAST no. 8. Liège, Belgium: Psychologie du Travail, Université de l'État à Liège.

Dennett, D. (1982). Beyond belief. In A. Woodfield (Ed.), Thought and object. Oxford: Clarendon Press.

Dörner, D. (1983). Heuristics and cognition in complex systems. In R. Groner, M. Groner, and W. F. Bischof (Eds.), Methods of heuristics. Hillsdale, NJ: Erlbaum.

Ehn, P., and Kyng, M. (1984). A tool perspective on design of interactive computer support for skilled workers. Unpublished manuscript, Swedish Center for Working Life, Stockholm.

Elm, W. C., and Woods, D. D. (1985). Getting lost: A case study in interface design. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 927-931). Santa Monica, CA: Human Factors Society.

Fischhoff, B. (1986). Decision making in complex systems. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support. New York: Springer-Verlag.

Fischhoff, B., Slovic, P., and Lichtenstein, S. (1979). Improving intuitive judgment by subjective sensitivity analysis. Organizational Behavior and Human Performance, 23, 339-359.

Fischhoff, B., Lanir, Z., and Johnson, S. (1986). Military risk taking and modern C3I (Tech. Report 86-2). Eugene, OR: Decision Research.

Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgments. Psychological Bulletin, 101, 75-90.

Furnas, G. W. (1986). Generalized fisheye views. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 16-23). New York: ACM/SIGCHI.

Gadd, C. S., and Pople, H. E. (1987). An interpretation synthesis model of medical teaching rounds discourse: Implications for expert system interaction. International Journal of Educational Research, 1.

Gentner, D., and Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Erlbaum.

Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.

Gick, M. L., and Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-365.

Glaser, R. (1984). Education and thinking: The role of knowledge. American Psychologist, 39, 93-104.

Gruber, T., and Cohen, P. (1987). Design for acquisition: Principles of knowledge system design to facilitate knowledge acquisition (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 143-159.

Henderson, A., and Card, S. (1986). Rooms: The use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface (Tech. Report). Palo Alto, CA: Xerox PARC.

Hirschhorn, L. (1984). Beyond mechanization: Work and technology in a postindustrial age. Cambridge, MA: MIT Press.

Hollan, J., Hutchins, E., and Weitzman, L. (1984). Steamer: An interactive inspectable simulation-based training system. AI Magazine, 4, 15-27.

Hollnagel, E., Mancini, G., and Woods, D. D. (Eds.). (1986). Intelligent decision support in process environments. New York: Springer-Verlag.

Hollnagel, E., and Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.

Hoogovens Report. (1976). Human factors evaluation: Hoogovens No. 2 hot strip mill (Tech. Report FR251). London: British Steel Corporation/Hoogovens.

Hutchins, E., Hollan, J., and Norman, D. A. (1985). Direct manipulation interfaces. Human-Computer Interaction, 1, 311-338.

James, W. (1890). The principles of psychology. New York: Holt.

Kieras, D. E., and Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365-394.

Klein, G. A. (in press). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine research, vol. 5. Greenwich, CT: JAI Press.

Kotovsky, K., Hayes, J. R., and Simon, H. A. (1985). Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 17, 248-294.

Kruglanski, A., Friedland, N., and Farkash, E. (1984). Lay persons' sensitivity to statistical information: The case of high perceived applicability. Journal of Personality and Social Psychology, 46, 503-518.

Lesh, R. (1987). The evolution of problem representations in the presence of powerful conceptual amplifiers. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics. Hillsdale, NJ: Erlbaum.

Malone, T. W. (1983). How do people organize their desks? Implications for designing office automation systems. ACM Transactions on Office Information Systems, 1, 99-112.

Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press. (Special issue of International Journal of Man-Machine Studies, vol. 27.)

March, J. G., and Weisinger-Baylon, R. (Eds.). (1986). Ambiguity and command. Marshfield, MA: Pitman Publishing.

McKendree, J., and Carroll, J. M. (1986). Advising roles of a computer consultant. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 35-40). New York: ACM/SIGCHI.

Miller, P. L. (1983). ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 449-461.

Mitchell, C., and Forren, M. G. (1987). Multimodal user input to supervisory control systems: Voice-augmented keyboard. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 594-607.

Mitchell, C., and Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.

Miyata, Y., and Norman, D. A. (1986). Psychological issues in support of multiple activities. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.

Montmollin, M. de, and De Keyser, V. (1985). Expert logic vs. operator logic. In G. Johannsen, G. Mancini, and L. Martensson (Eds.), Analysis, design and evaluation of man-machine systems. CEC-JRC Ispra, Italy: IFAC.

Muir, B. (1987). Trust between humans and machines. International Journal of Man-Machine Studies, 27, 527-539. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Newell, A., and Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209-242.

Noble, D. F. (1984). Forces of production: A social history of industrial automation. New York: Alfred A. Knopf.

Norman, D. A. (1981). Steps towards a cognitive engineering (Tech. Report). San Diego: University of California, San Diego, Program in Cognitive Science.

Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the ACM, 26, 254-258.

Norman, D. A., and Draper, S. W. (1986). User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.

Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.

Perkins, D., and Martin, F. (1986). Fragile knowledge and neglected strategies in novice programmers. In E. Soloway and S. Iyengar (Eds.), Empirical studies of programmers. Norwood, NJ: Ablex.

Pew, R. W., et al. (1986). Cockpit automation technology (Tech. Report 6133). Cambridge, MA: BBN Laboratories, Inc.

Pope, R. H. (1978). Power station control room and desk design: Alarm system and experience in the use of CRT displays. In Proceedings of the International Symposium on Nuclear Power Plant Control and Instrumentation. Cannes, France.

Pople, H., Jr. (1985). Evolution of an expert system: From Internist to Caduceus. In I. De Lotto and M. Stefanelli (Eds.), Artificial intelligence in medicine. New York: Elsevier Science Publishers.

Quinn, L., and Russell, D. M. (1986). Intelligent interfaces: User models and planners. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 314-320). New York: ACM/SIGCHI.

Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.

Rizzo, A., Bagnara, S., and Visciola, M. (1987). Human error detection processes. International Journal of Man-Machine Studies, 27, 555-570. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Robertson, G., McCracken, D., and Newell, A. (1981). The ZOG approach to man-machine communication. International Journal of Man-Machine Studies, 14, 461-488.

Rochlin, G. I., La Porte, T. R., and Roberts, K. H. (in press). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review.

Roth, E. M., Bennett, K., and Woods, D. D. (1987). Human interaction with an intelligent machine. International Journal of Man-Machine Studies, 27, 479-525. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Roth, E. M., and Woods, D. D. (1988). Aiding human performance I: Cognitive analysis. Le Travail Humain, 51(1), 39-64.

Schum, D. A. (1980). Current developments in research on cascaded inference. In T. S. Wallstein (Ed.), Cognitive processes in decision and choice behavior. Hillsdale, NJ: Erlbaum.

Selfridge, O. G., Rissland, E. L., and Arbib, M. A. (1984). Adaptive control of ill-defined systems. New York: Plenum Press.

Sheridan, T., and Hennessy, R. (Eds.). (1984). Research and modeling of supervisory control behavior. Washington, DC: National Academy Press.

Shneiderman, B. (1986). Seven plus or minus two central issues in human-computer interaction. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 343-349). New York: ACM/SIGCHI.

Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S., and Suchman, L. (1985, September). Beyond the chalkboard: Using computers to support collaboration and problem solving in meetings (Tech. Report). Palo Alto, CA: Intelligent Systems Laboratory, Xerox Palo Alto Research Center.

Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.

Wiecha, C., and Henrion, M. (1987). A graphical tool for structuring and understanding quantitative decision models. In Proceedings of Workshop on Visual Languages. New York: IEEE Computer Society.

Wiener, E. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229-244.

Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.

Woods, D. D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L. P. Goodstein, H. B. Andersen, and S. E. Olsen (Eds.), Mental models, tasks and errors: A collection of essays to celebrate Jens Rasmussen's 60th birthday. London: Taylor and Francis.

Woods, D. D., and Hollnagel, E. (1987). Mapping cognitive demands in complex problem-solving worlds (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 257-275.

Woods, D. D., and Roth, E. M. (1986). Models of cognitive behavior in nuclear power plant personnel (NUREG-CR-4532). Washington, DC: U.S. Nuclear Regulatory Commission.

Woods, D. D., and Roth, E. M. (1988a). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interaction. New York: North-Holland.

Woods, D. D., and Roth, E. M. (1988b). Aiding human performance II: From cognitive analysis to support systems. Le Travail Humain, 51, 139-172.

Woods, D. D., Roth, E. M., and Pople, H. (1987). Cognitive Environment Simulation: An artificial intelligence system for human performance assessment (NUREG-CR-4862). Washington, DC: U.S. Nuclear Regulatory Commission.


The question is not why people failed to accept a useful technology, but rather how the original alarm system supported, and the new system failed to support, operator strategies to cope with the world's problem-solving demands. This is not to say that the strategies developed to cope with the original task demands are always optimal, or even that they always produce acceptable levels of performance, but only that understanding how they function in the initial cognitive environment is a starting point to develop truly effective support systems (e.g., Roth and Woods, 1988).

Semantic approaches, on the other hand, are vulnerable to myopia. If each world is seen as completely unique and must be investigated tabula rasa, then cognitive engineering can be no more than a set of techniques that are used to investigate every world anew. If this were the case, it would impose strong practical constraints on principle-driven development of support systems, restricting it to cases in which the consequences of poor performance are extremely high.

To achieve relevance to specific worlds and generalizability across worlds, the cognitive language must be able to escape the language of particular worlds as well as the language of particular computational mechanisms and identify pragmatic reasoning situations, after Cheng and Holyoak (1985) and Cheng, Holyoak, Nisbett, and Oliver (1986). These reasoning situations are abstract relative to the language of the particular application in question, and therefore transportable across worlds, but they are also pragmatic in the sense that the reasoning involves knowledge of the things being reasoned about. More ambitious are attempts to build a formal cognitive language, for example, that by Coombs and Hartley (1987) through their work on coherence in model generative reasoning.


about Improved Performance

Cognitive engineering is not merely about the contents of a world; it is about changing behavior/performance in that world. This is both a practical consideration (improving performance or reducing errors justifies the investment from the point of view of the world in question) and a theoretical consideration (the ability to produce practical changes in performance is the criterion for demonstrating an understanding of the factors involved). Basic concepts are confirmed only when they generate treatments (aiding, either on-line or off-line) that make a difference in the target world. Cheng's concepts about human deductive reasoning (Cheng and Holyoak, 1985; Cheng et al., 1986) generated treatments that produced very large performance changes, both absolutely and relative to the history of rather ineffectual alternative treatments to human biases in deductive reasoning.

To achieve the goal of enhanced performance, cognitive engineering must first identify the sources of error that impair the performance of the current problem-solving system. This means that there is a need for cognitive engineering to understand where, how, and why machine, human, and human-plus-machine problem solving breaks down in natural problem-solving habitats.

Buggy knowledge (missing, incomplete, or erroneous knowledge) is one source of error (e.g., Brown and Burton, 1978; Brown and VanLehn, 1980; Gentner and Stevens, 1983). The buggy knowledge approach provides a specification of the knowledge structures (e.g., incomplete or erroneous knowledge) that are postulated to produce the pattern of errors and correct responses that characterize the performance of particular individuals. The specification is typically embodied as a computer program and constitutes a theory of what these individuals know (including misconceptions). Human performance aiding then focuses on providing missing knowledge and correcting the knowledge bugs. From the point of view of computational power, more knowledge and more correct knowledge can be embodied and delivered in the form of a rule-based expert system, following a knowledge acquisition phase that determines the fine-grained domain knowledge.
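The buggy-knowledge idea can be made concrete with the classic subtraction example from Brown and Burton's work: a procedural skill is modeled as a program, and a perturbed ("buggy") variant reproduces a particular student's mix of correct answers and characteristic errors. The sketch below is a hedged illustration; the "smaller-from-larger" bug shown is one well-known example from that literature, and the function names are invented here.

```python
# Hedged illustration of "buggy knowledge" (after Brown and Burton's
# diagnostic models of subtraction bugs): a skill modeled as a program,
# plus a buggy variant that predicts a student's error pattern.

def subtract_correct(a, b):
    return a - b

def subtract_smaller_from_larger(a, b):
    # Bug: in each column the student subtracts the smaller digit from
    # the larger, never borrowing.
    result, place = 0, 1
    while a > 0 or b > 0:
        da, db = a % 10, b % 10
        result += abs(da - db) * place
        a, b, place = a // 10, b // 10, place * 10
    return result

# The buggy model matches the student's answers where no borrow is
# needed and predicts the characteristic error where one is.
print(subtract_smaller_from_larger(47, 23))  # 24, same as the correct answer
print(subtract_smaller_from_larger(42, 25))  # 23, not 17: the diagnostic error
```

The diagnostic value lies in the pattern: an aiding system that identified this bug would target the borrowing procedure specifically, rather than reteaching subtraction wholesale.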

But the more critical question for effective human performance may be how knowledge is activated and utilized in the actual problem-solving environment (e.g., Bransford, Sherwood, Vye, and Rieser, 1986; Cheng et al., 1986). The question concerns not merely whether the problem solver knows some particular piece of domain knowledge, such as the relationship between two entities. Does he or she know that it is relevant to the problem at hand, and does he or she know how to utilize this knowledge in problem solving? Studies of education and training often show that students successfully acquire knowledge that is potentially relevant to solving domain problems but that they often fail to exhibit skilled performance, for example, differences in solving mathematical exercises versus word problems (see Gentner and Stevens, 1983, for examples).

The fact that people possess relevant knowledge does not guarantee that that knowledge will be activated and utilized when needed in the actual problem-solving environment. This is the issue of expression of knowledge. Education and training tend to assume that if a person can be shown to possess a piece of knowledge in any circumstance, then this knowledge should be accessible under all conditions in which it might be useful. In contrast, a variety of research has revealed dissociation effects whereby knowledge accessed in one context remains inert in another (Bransford et al., 1986; Cheng et al., 1986; Perkins and Martin, 1986). For example, Gick and Holyoak (1980) found that unless explicitly prompted, people will fail to apply a recently learned problem-solution strategy to an isomorphic problem (see also Kotovsky, Hayes, and Simon, 1985). Thus the fact that people possess relevant knowledge does not guarantee that this knowledge will be activated when needed. The critical question is not to show that the problem solver possesses domain knowledge, but rather the more stringent criterion that situation-relevant knowledge is accessible under the conditions in which the task is performed. This has been called the problem of inert knowledge: knowledge that is accessed only in a restricted set of contexts.

The general conclusion of studies on the problem of inert knowledge is that possession of the relevant domain knowledge or strategies by themselves is not sufficient to ensure that this knowledge will be accessed in new contexts. Off-line training experiences need to promote an understanding of how concepts and procedures can function as tools for solving relevant problems (Bransford et al., 1986; Brown, Bransford, Ferrara, and Campione, 1983; Brown and Campione, 1986). Training has to be about more than simply student knowledge acquisition; it must also enhance the expression of knowledge by conditionalizing knowledge to its use via information about triggering conditions and constraints (Glaser, 1984).

Similarly, on-line representations of the world can help or hinder problem solvers in recognizing what information or strategies are relevant to the problem at hand (Woods, 1986). For example, Fischhoff, Slovic, and Lichtenstein (1979) and Kruglanski, Friedland, and Farkash (1984) found that judgmental biases (e.g., representativeness) were greatly reduced or eliminated when aspects


of the situation cued the relevance of statistical information and reasoning. Thus one dimension along which representations vary is their ability to provide prompts to the knowledge relevant in a given context.

The challenge for cognitive engineering is to study and develop ways to enhance the expression of knowledge and to avoid inert knowledge. What training content and experiences are necessary to develop conditionalized knowledge (Glaser, 1984; Lesh, 1987; Perkins and Martin, 1986)? What representations cue people about the knowledge that is relevant to the current context of goals, system state, and practitioner intentions (Wiecha and Henrion, 1987; Woods and Roth, 1988)?

Cognitive Engineering Is about Systems

Cognitive engineering is about systems. One source of tremendous confusion has been an inability to clearly define the systems of interest. From one point of view, the computer program being executed is the end application of concern. In this case one often speaks of the interface, the tasks performed within the syntax of the interface, and human users of the interface. Notice that the application world (what the interface is used for) is deemphasized. The bulk of work on human-computer interaction takes this perspective. Issues of concern include designing for learnability (e.g., Brown and Newman, 1985; Carroll and Carrithers, 1984; Kieras and Polson, 1985) and designing for ease and pleasurableness of use (Malone, 1983; Norman, 1983; Shneiderman, 1986).

A second perspective is to distinguish the interface from the application world (Hollnagel, Mancini, and Woods, 1986; Mancini, Woods, and Hollnagel, in press; Miyata and Norman, 1986; Rasmussen, 1986; Stefik et al., 1985). For example, text-editing tasks are performed only in some larger context, such


as transcription, data entry, and composition. The interface is an external representation of an application world; that is, a medium through which agents come to know and act on the world: troubleshooting electronic devices (Davis, 1983); logistic maintenance systems; managing data communication networks; managing power distribution networks; medical diagnosis (Cohen et al., 1987; Gadd and Pople, 1987); aircraft and helicopter flight decks (Pew et al., 1986); air traffic control systems; process control accident response (Woods, Roth, and Pople, 1987); and command and control of a battlefield (e.g., Fischhoff et al., 1986). Tasks are properties of the world in question, although performance of these fundamental tasks (i.e., demands) is affected by the design of the external representation (e.g., Mitchell and Saisi, 1987). The human is not a passive user of a computer program but is an active problem solver in some world. Therefore we will generally refer to people as domain agents or actors or problem solvers, and not as users.

In part, the difference in the foregoing views can be traced to differences in the cognitive complexity of the domain task being supported. Research on person-computer interaction has typically dealt with office applications (e.g., word processors for document preparation or copying machines for duplicating material) in which the goals to be accomplished (e.g., replace word 1 with word 2) and the steps required to accomplish them are relatively straightforward. These applications fall at one extreme of the cognitive complexity space. In contrast, there are many decision-making and supervisory environments (e.g., military situation assessment, medical diagnosis) in which problem formulation, situation assessment, goal definition, plan generation, and plan monitoring and adaptation are significantly more complex. It is in designing interfaces and aids for


these applications that it is essential to distinguish the world to be acted on from the interface or window on the world (how one comes to know that world) and from agents who can act directly or indirectly on the world.

The system of interest in design should not be the machine problem solver per se, nor should the focus of interest in evaluation be the performance of the machine problem solver alone. Ultimately, the focus must be the design and the performance of the human-machine problem-solving ensemble: how to couple human intelligence and machine power in a single integrated system that maximizes overall performance.

Cognitive Engineering Is about Multiple Cognitive Agents

A large number of the worlds that cognitive engineering should be able to address contain multiple agents who can act on the world in question (e.g., command and control, process control, data communication networks). Not only do we need to be clear about where systemic boundaries are drawn with respect to the application world and interfaces to or representations of the world, we also need to be clear about the different agents who can act directly or indirectly on the world. Cognitive engineering must be able to address systems with multiple cognitive agents. This applies to multiple human cognitive systems, often called distributed decision making (e.g., Fischhoff, 1986; Fischhoff et al., 1986; March and Weisinger-Baylon, 1986; Rochlin, La Porte, and Roberts, in press; Schum, 1980).

Because of the expansions in computational powers, the machine element can be thought of as a partially autonomous cognitive agent in its own right. This raises the problem of how to build a cognitive system that combines both human and machine cognitive systems, or in other words, joint cognitive systems (Hollnagel, Mancini, and Woods, 1986; Mancini et al., in press). When a system includes these machine agents, the human role is not eliminated but shifted. This means that changes in automation are changes in the joint human-machine cognitive system, and the design goal is to maximize overall performance.

One metaphor that is often invoked to frame questions about the relationship between human and machine intelligence is to examine human-human relationships in multiperson problem-solving or advisory situations and then to transpose the results to human-intelligent machine interaction (e.g., Coombs and Alty, 1984). Following this metaphor leads Muir (1987) to raise the question of the role of trust between man and machine in effective performance. One provocative question that Muir's analysis generates is, how does the level of trust between human and machine problem solvers affect performance? The practitioner's judgment of machine competence or predictability can be miscalibrated, leading to excessive trust or mistrust. Either a system will be underutilized or ignored when it could provide effective assistance, or the practitioner will defer to the machine even in areas that challenge or exceed the machine's range of competence.

Another question concerns how trust is established between human and machine. Trust or mistrust is based on cumulative experience with the other agent that provides evidence about enduring characteristics of the agent, such as competence and predictability. This means that factors about how new technology is introduced into the work environment can play a critical role in building or undermining trust in the machine problem solver. If this stage of technology introduction is mishandled (for example, practitioners are exposed to the system before it


is adequately debugged), the practitioner's trust in the machine's competence can be undermined. Muir's analysis shows how variations in explanation and display facilities affect how the person will use the machine, by affecting his or her ability to see how the machine works and, therefore, his or her level of calibration. Muir also points out how human information-processing biases can affect how the evidence of experience is interpreted in the calibration process.

A second metaphor that is frequently invoked is supervisory control (Rasmussen, 1986; Sheridan and Hennessy, 1984). Again the machine element is thought of as a semiautonomous cognitive system, but in this case it is a lower-order subordinate, albeit partially autonomous. The human supervisor generally has a wider range of responsibility, and he or she possesses ultimate responsibility and authority. Boy (1987) uses this metaphor to guide the development of assistant systems built from AI technology.

In order for a supervisory control architecture between human and machine agents to function effectively, several requirements must be met that, as Woods (1986) has pointed out, are often overlooked when tool-driven constraints dominate design. First, the supervisor must have real as well as titular authority; machine problem solvers can be designed and introduced in such a way that the human retains the responsibility for outcomes without any effective authority. Second, the supervisor must be able to redirect the lower-order machine cognitive system. Roth et al. (1987) found that some practitioners tried to devise ways to instruct an expert system in situations in which the machine's problem solving had broken down, even when the machine's designer had provided no such mechanisms. Third, in order to be able to supervise another agent, there is need for a common or shared representation


of the state of the world and of the state of the problem-solving process (Woods and Roth, 1988b); otherwise communication between the agents will break down (e.g., Suchman, 1987).

Significant attention has been devoted to the issue of how to get intelligent machines to assess the goals and intentions of humans without requiring explicit statements (e.g., Allen and Perrault, 1980; Quinn and Russell, 1986). However, the supervisory control metaphor highlights that it is at least as important to pay attention to what information or knowledge people need to track the intelligent machine's state of mind (Woods and Roth, 1988a).

A third metaphor is to consider the new machine capabilities as extensions and expansions along a dimension of machine power. In this metaphor, machines are tools; people are tool builders and tool users. Technological development has moved from physical tools (tools that magnify capacity for physical work) to perceptual tools (extensions to the human perceptual apparatus, such as medical imaging) and now, with the arrival of AI technology, to cognitive tools. (Although this type of tool has a much longer history, e.g., aide-mémoires or decision analysis, AI has certainly increased the interest in and ability to provide cognitive tools.)

In this metaphor the question of the relationship between machine and human takes the form of, what kind of tool is an intelligent machine (e.g., Ehn and Kyng, 1984; Suchman, 1987; Woods, 1986)? At one extreme, the machine can be a prosthesis that compensates for a deficiency in human reasoning or problem solving. This could be a local deficiency for the population of expected human practitioners or a global weakness in human reasoning. At the other extreme, the machine can be an instrument in the hands of a fundamentally competent but limited-resource


human practitioner (Woods, 1986). The machine aids the practitioner by providing increased or new kinds of resources (either knowledge resources or processing resources, such as an expanded field of attention).

The extra resources may support improved performance in several ways. One path is to off-load overhead information-processing activities from the person to the machine, to allow the human practitioner to focus his or her resources on higher-level issues and strategies. Examples include keeping track of multiple ongoing activities in an external memory, performing basic data computations or transformations, and collecting the evidence related to decisions about particular domain issues, as occurred recently with new computer-based displays in nuclear power plant control rooms. Extra resources may help to improve performance in another way, by allowing a restructuring of how the human performs the task, shifting performance onto a new, higher plateau (see Pea, 1985). This restructuring concept is in contrast to the usual notion of new systems as amplifiers of user capabilities. As Pea (1985) points out, the amplification metaphor implies that support systems improve human performance by increasing the strength or power of the cognitive processes the human problem solver goes through to solve the problem, but without any change in the underlying activities, processes, or strategies that determine how the problem is solved. Alternatively, the resources provided (or not provided) by new performance aids and interface systems can support restructuring of the activities, processes, or strategies that carry out the cognitive functions relevant to performing domain tasks (e.g., Woods and Roth, 1988). New levels of performance are now possible, and the kinds of errors one is prone to (and therefore the consequences of errors) change as well.


The instrumental perspective suggests that the most effective power provided by good cognitive tools is conceptualization power (Woods and Roth, 1988a). The importance of conceptualization power in effective problem-solving performance is often overlooked because the part of the problem-solving process that it most crucially affects, problem formulation and reformulation, is often left out of studies of problem solving and the design basis of new support systems. Support systems that increase conceptualization power (1) enhance a problem solver's ability to experiment with possible worlds or strategies (e.g., Hollan, Hutchins, and Weitzman, 1984; Pea, 1985; Woods et al., 1987); (2) enhance the ability to visualize or to make concrete the abstract and uninspectable (analogous to perceptual tools), in order to better see the implications of a concept and to help one restructure one's view of the problem (Becker and Cleveland, 1984; Coombs and Hartley, 1987; Hutchins, Hollan, and Norman, 1985; Pople, 1985); and (3) enhance error detection by providing better feedback about the effects/results of actions (Rizzo, Bagnara, and Visciola, 1987).

Cognitive Engineering Is Problem-Driven, Tool-Constrained

Cognitive engineering is problem-driven, tool-constrained. This means that cognitive engineering must be able to analyze a problem-solving context and understand the sources of both good and poor performance; that is, the cognitive problems to be solved or challenges to be met (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

To build a cognitive description of a problem-solving world, one must understand how representations of the world interact with different cognitive demands imposed by the application world in question and with characteristics of the cognitive agents, both for existing and prospective changes in the


world. Building a cognitive description is part of a problem-driven approach to the application of computational power.

The results from this analysis are used to define the kind of solutions that are needed to enhance successful performance: to meet the cognitive demands of the world, to help the human function more expertly, to eliminate or mitigate error-prone points in the total cognitive system (demand-resource mismatches). The results of this process can then be deployed in many possible ways, as constrained by tool-building limitations and tool-building possibilities: exploration training worlds, new information representation aids, advisory systems, or machine problem solvers (see Roth and Woods, 1988; Woods and Roth, 1988).

In tool-driven approaches, knowledge acquisition focuses on describing domain knowledge in terms of the syntax of computational mechanisms; that is, the language of implementation is used as a cognitive language. Semantic questions are displaced, either to whomever selects the computational mechanisms or to the domain expert who enters knowledge.

The alternative is to provide an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean about problem solving in the domain (Woods, 1988). Acquiring and using a domain semantics is essential to avoiding potential errors and specifying performance boundaries when building intelligent machines (Roth et al., 1987). Techniques for analyzing cognitive demands not only help characterize a particular world but also help to build a repertoire of general cognitive situations that are transportable. There is a clear trend toward this conception of knowledge acquisition in order to achieve more effective decision support and fewer brittle machine problem solvers (e.g., Clancey, 1985; Coombs, 1986; Gruber and Cohen, 1987).

AN EXAMPLE OF HOW COGNITIVE AND COMPUTATIONAL TECHNOLOGIES INTERACT

To illustrate the role of cognitive engineering in the deployment of new computational powers, consider a case in human-computer interaction (for other cases see Mitchell and Forren, 1987; Mitchell and Saisi, 1987; Roth and Woods, 1988; Woods and Roth, 1988). It is one example of how purely technology-driven deployment of new automation capabilities can produce unintended and unforeseen negative consequences. In this case an attempt was made to implement a computerized procedure system using a commercial hypertext system for building and navigating large network data bases. Because cognitive engineering issues were not considered in the application of the new technology, a high-level person-machine performance problem resulted: the getting-lost phenomenon (Woods, 1984). Based on a cognitive analysis of the world's demands, it was possible to redesign the system to support domain actors and eliminate the getting-lost problems (Elm and Woods, 1985). Through cognitive engineering it proved possible to build a more effective computerized procedure system that, for the most part, was within the technological boundaries set by the original technology chosen for the application.

The data base application in question was designed to computerize paper-based instructions for nuclear power plant emergency operation. The system was built based on a network data base shell with a built-in interface for navigating the network (Robertson, McCracken, and Newell, 1981). The shell already treated human-computer interface issues, so it was assumed possible to create the computerized system simply by


entering domain knowledge (i.e., the current instructions as implemented for the paper medium) into the interface and network data base framework provided by the shell.

The system contained two kinds of frames: menu frames, which served to point to other frames, and content frames, which contained instructions from the paper procedures (generally one procedure step per frame). In preliminary tests of the system, it was found that people uniformly failed to complete recovery tasks with procedures computerized in this way. They became disoriented or lost: unable to keep procedure steps in pace with plant behavior, unable to determine where they were in the network of frames, unable to decide where to go next, or unable even to find places where they knew they should be (i.e., they diagnosed the situation and knew the appropriate responses as trained operators, yet could not find the relevant procedural steps in the network). These results were found with people experienced with the paper-based procedures and plant operations, as well as with people knowledgeable in the frame-network software package and how the procedures were implemented within it.
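The two-frame structure described above can be sketched in a few lines of code. This is an illustrative reconstruction, not the actual shell's data model; all names (`Frame`, `network`, `visible_text`, the procedure labels) are assumptions introduced for the example.

```python
# Illustrative sketch (assumed names, not the original shell's API) of the
# frame network described in the text: menu frames that only point to other
# frames, and content frames that each hold a single procedure step.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    name: str
    kind: str                                       # "menu" or "content"
    text: str = ""                                  # the step, for content frames
    links: List[str] = field(default_factory=list)  # frames reachable from here

# A tiny network: reaching any step means descending through menu frames,
# and each content frame shows only its own step (hypothetical labels).
network = {
    "top":      Frame("top",      "menu", links=["proc-E0", "proc-E1"]),
    "proc-E0":  Frame("proc-E0",  "menu", links=["E0-step1", "E0-step2"]),
    "proc-E1":  Frame("proc-E1",  "menu", links=["E1-step1"]),
    "E0-step1": Frame("E0-step1", "content", text="Verify reactor trip."),
    "E0-step2": Frame("E0-step2", "content", text="Verify safety injection."),
    "E1-step1": Frame("E1-step1", "content", text="Check secondary pressure."),
}

def visible_text(frame_name: str) -> str:
    """What the operator can see at one time: a single frame's contents."""
    f = network[frame_name]
    return f.text if f.kind == "content" else " | ".join(f.links)
```

Because the display is always exactly one frame, reading ahead or glancing at a neighboring step requires backing out to a menu frame and descending again, which is the narrow-window property the preliminary tests exposed.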

What was the source of the disorientation problem? It resulted from a failure to analyze the cognitive demands associated with using procedures in an externally paced world. For example, in using the procedures the operator often is required to interrupt one activity and transition to another step in the procedure, or to a different procedure, depending on plant conditions and plant responses to operator actions. As a result, operators need to be able to rapidly transition across procedure boundaries and to return to incomplete steps. Because of the size of a frame, there was a very high proportion of menu frames relative to content frames, and the content frames provided a narrow window on the world. This structure made it difficult to read ahead


to anticipate instructions, to mark steps pending completion and return to them easily, to see the organization of the steps, or to mark a trail of activities carried out during the recovery. Many activities that are inherently easy to perform in a physical book turned out to be very difficult to carry out, for example, reading ahead. The result was a mismatch between user information-processing activities during domain tasks and the structure of the interface as a representation of the world of recovery from abnormalities.

These results triggered a full design cycle that began with a cognitive analysis to determine the user information-handling activities needed to effectively accomplish recovery tasks in emergency situations. Following procedures was not simply a matter of linear, step-by-step execution of instructions; rather, it required the ability to maintain a broad context of the purpose and relationships among the elements in the procedure (see also Brown et al., 1982; Roth et al., 1987). Operators needed to maintain awareness of the global context (i.e., how a given step fits into the overall plan), to anticipate the need for actions by looking ahead, and to monitor for changes in plant state that would require adaptation of the current response plan.

A variety of cognitive engineering techniques were utilized in a new interface design to support the demands (see Woods, 1984). First, a spatial metaphor was used to make the system more like a physical book. Second, display selection/movement options were presented in parallel, rather than sequentially, with procedural information (defining two types of windows in the interface). Transition options were presented at several grains of analysis, to support moves from step to step as easily as moves across larger units in the structure of the procedure system. In addition, incomplete steps were automatically tracked, and those steps were made directly accessible (e.g., electronic bookmarks or placeholders).
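The electronic-bookmark idea can be illustrated with a minimal sketch. The class, method names, and step labels below are hypothetical; the original system's implementation is not described at this level of detail in the text.

```python
# Minimal sketch (assumed names, not the original implementation) of
# automatic tracking of incomplete steps: leaving a step before completing
# it bookmarks it, and completing a step pops back to the last suspension.
class StepTracker:
    def __init__(self) -> None:
        self.pending: list[str] = []   # steps suspended before completion
        self.current: str | None = None

    def open_step(self, step: str) -> None:
        # Moving to a new step while one is unfinished bookmarks the old one
        # automatically -- no explicit marking action by the operator.
        if self.current is not None:
            self.pending.append(self.current)
        self.current = step

    def complete_step(self) -> None:
        # Finishing the current step returns directly to the most recently
        # suspended step, acting as an electronic placeholder.
        self.current = self.pending.pop() if self.pending else None

tracker = StepTracker()
tracker.open_step("E0 step 3: verify auxiliary feedwater")
tracker.open_step("E1 step 1: check secondary pressure")  # interruption
tracker.complete_step()  # interrupted step is restored as the current place
```

The point of the sketch is the design choice the redesign made: suspension and return are side effects of normal navigation, so the operator pays no extra attention cost to keep track of unfinished business.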

To provide the global context within which the current procedure step occurs, the step of interest is presented in detail and is embedded in a skeletal structure of the larger response plan of which it is a part (Furnas, 1986; Woods, 1984). Context sensitivity was supported by displaying the rules for possible adaptation or shifts in the current response plan that are relevant to the current context, in parallel with the current relevant options and the current region of interest in the procedures (a third window). Note how the cognitive analysis of the domain defined what types of data needed to be seen effectively in parallel, which then determined the number of windows required. Also note that the cognitive engineering redesign was, with a few exceptions, directly implementable within the base capabilities of the interface shell.
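The skeletal-structure display is in the spirit of Furnas's (1986) generalized fisheye view, where each element's degree of interest is its a priori importance minus its distance from the current focus. The sketch below shows that idea on a toy response plan; the plan contents, function names, and threshold are assumptions for illustration, not the redesigned system's actual code.

```python
# Fisheye sketch in the spirit of Furnas (1986):
#   DOI(x | focus) = API(x) - distance(x, focus)
# High-level plan elements stay visible in skeletal form everywhere;
# fine-grained steps appear only near the step of interest.

# A flattened response plan as (name, depth): depth 0 = major phase,
# deeper = finer-grained steps (hypothetical labels).
plan = [
    ("Phase A: stabilize plant", 0),
    ("A.1 verify trip", 1),
    ("A.1.a check rod bottom lights", 2),
    ("A.2 verify safety injection", 1),
    ("Phase B: diagnose event", 0),
    ("B.1 check pressures", 1),
]

def doi(index: int, focus: int) -> float:
    api = -plan[index][1]        # shallower elements matter more a priori
    dist = abs(index - focus)    # distance from the step of interest
    return api - dist

def fisheye_view(focus: int, threshold: float = -3.0) -> list[str]:
    """The focused step in detail plus the surrounding skeleton."""
    return [name for i, (name, _) in enumerate(plan) if doi(i, focus) >= threshold]
```

With the focus on a detailed step inside Phase A, distant detail steps drop out while the major phases remain as the skeletal frame, which is the "detail embedded in a skeleton of the larger plan" effect described above.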

As the foregoing example saliently demonstrates, there can be severe penalties for failing to adequately map the cognitive demands of the environment. However, if we understand the cognitive requirements imposed by the domain, then a variety of techniques can be employed to build support systems for those functions.

SUMMARY

The problem of providing effective decision support hinges on how the designer decides what will be useful in a particular application. Can researchers provide designers with concepts and techniques to determine what will be useful support systems, or are we condemned to simply build what can be built practically and wait for the judgment of experience? Is principle-driven design possible?

A vigorous and viable cognitive engineering can provide the knowledge and techniques necessary for principle-driven design. Cognitive engineering does this by providing the basis for a problem-driven, rather than a technology-driven, approach whereby the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver. Cognitive engineering can address (a) existing cognitive systems, in order to identify deficiencies that cognitive system redesign can correct, and (b) prospective cognitive systems, as a design tool during the allocation of cognitive tasks and the development of an effective joint architecture. In this paper we have attempted to outline the questions that need to be answered to make this promise real and to point to research that already has begun to provide the necessary concepts and techniques.

REFERENCES

Allen, J., and Perrault, C. (1980). Analyzing intention in utterances. Artificial Intelligence, 15, 143-178.

Becker, R. A., and Cleveland, W. S. (1984). Brushing the scatterplot matrix: High-interaction graphical methods for analyzing multidimensional data (Tech. Report). Murray Hill, NJ: AT&T Bell Laboratories.

Boy, G. A. (1987). Operator assistant systems. International Journal of Man-Machine Studies, 27, 541-554. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Bransford, J., Sherwood, R., Vye, N., and Rieser, J. (1986). Teaching and problem solving: Research foundations. American Psychologist, 41, 1078-1089.

Brown, A. L., Bransford, J. D., Ferrara, R. A., and Campione, J. C. (1983). Learning, remembering, and understanding. In J. H. Flavell and E. M. Markman (Eds.), Carmichael's manual of child psychology. New York: Wiley.

Brown, A. L., and Campione, J. C. (1986). Psychological theory and the study of learning disabilities. American Psychologist, 41, 1059-1068.

Brown, J. S., and Burton, R. R. (1978). Diagnostic models for procedural bugs in basic mathematics. Cognitive Science, 2, 155-192.

Brown, J. S., Moran, T. P., and Williams, M. D. (1982). The semantics of procedures (Tech. Report). Palo Alto, CA: Xerox Palo Alto Research Center.

Brown, J. S., and Newman, S. E. (1985). Issues in cognitive and social ergonomics: From our house to Bauhaus. Human-Computer Interaction, 1, 359-391.

Brown, J. S., and VanLehn, K. (1980). Repair theory: A generative theory of bugs in procedural skills. Cognitive Science, 4, 379-426.

Carroll, J. M., and Carrithers, C. (1984). Training wheels in a user interface. Communications of the ACM, 27, 800-806.

Cheng, P. W., and Holyoak, K. J. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.

Cheng, P., Holyoak, K., Nisbett, R., and Oliver, L. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 18, 293-328.

Clancey, W. J. (1985). Heuristic classification. Artificial Intelligence, 27, 289-350.

Cohen, P., Day, D., Delisio, J., Greenberg, M., Kjeldsen, R., and Suthers, D. (1987). Management of uncertainty in medicine. In Proceedings of the IEEE Conference on Computers and Communications. New York: IEEE.

Coombs, M. J. (1986). Artificial intelligence and cognitive technology: Foundations and perspectives. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.

Coombs, M. J., and Alty, J. L. (1984). Expert systems: An alternative paradigm. International Journal of Man-Machine Studies, 20, 21-43.

Coombs, M. J., and Hartley, R. T. (1987). The MGR algorithm and its application to the generation of explanations for novel events. International Journal of Man-Machine Studies, 27, 679-708. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Davis, R. (1983). Reasoning from first principles in electronic troubleshooting. International Journal of Man-Machine Studies, 19, 403-423.

De Keyser, V. (1986). Les interactions hommes-machine: Caractéristiques et utilisations des différents supports d'information par les opérateurs (Person-machine interaction: How operators use different information channels). Rapport Politique Scientifique/FAST no. 8. Liège, Belgium: Psychologie du Travail, Université de l'État à Liège.

Dennett, D. (1982). Beyond belief. In A. Woodfield (Ed.), Thought and object. Oxford: Clarendon Press.

Dorner, D. (1983). Heuristics and cognition in complex systems. In R. Groner, M. Groner, and W. F. Bischof (Eds.), Methods of heuristics. Hillsdale, NJ: Erlbaum.

Ehn, P., and Kyng, M. (1984). A tool perspective on design of interactive computer support for skilled workers. Unpublished manuscript, Swedish Center for Working Life, Stockholm.

Elm, W. C., and Woods, D. D. (1985). Getting lost: A case study in interface design. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 927-931). Santa Monica, CA: Human Factors Society.

Fischhoff, B. (1986). Decision making in complex systems. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support. New York: Springer-Verlag.

Fischhoff, B., Slovic, P., and Lichtenstein, S. (1979). Improving intuitive judgment by subjective sensitivity analysis. Organizational Behavior and Human Performance, 23, 339-359.

Fischhoff, B., Lanir, Z., and Johnson, S. (1986). Military risk taking and modern C3I (Tech. Report 86-2). Eugene, OR: Decision Research.

Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgments. Psychological Bulletin, 101, 75-90.

Furnas, G. W. (1986). Generalized fisheye views. In M. Mantei and P. Orbeton (Eds.), Human Factors in Computing Systems: CHI'86 Conference Proceedings (pp. 16-23). New York: ACM/SIGCHI.

Gadd, C. S., and Pople, H. E. (1987). An interpretation synthesis model of medical teaching rounds discourse: Implications for expert system interaction. International Journal of Educational Research, 1.

Gentner, D., and Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Erlbaum.

Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.

Gick, M. L., and Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-365.

Glaser, R. (1984). Education and thinking: The role of knowledge. American Psychologist, 39, 93-104.

Gruber, T., and Cohen, P. (1987). Design for acquisition: Principles of knowledge system design to facilitate knowledge acquisition (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 143-159.

Henderson, A., and Card, S. (1986). Rooms: The use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface (Tech. Report). Palo Alto, CA: Xerox PARC.

Hirschhorn, L. (1984). Beyond mechanization: Work and technology in a postindustrial age. Cambridge, MA: MIT Press.

Hollan, J., Hutchins, E., and Weitzman, L. (1984). Steamer: An interactive, inspectable, simulation-based training system. AI Magazine, 4, 15-27.

Hollnagel, E., Mancini, G., and Woods, D. D. (Eds.). (1986). Intelligent decision support in process environments. New York: Springer-Verlag.

Hollnagel, E., and Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.

Hoogovens Report. (1976). Human factors evaluation: Hoogovens No. 2 hot strip mill (Tech. Report FR251). London: British Steel Corporation/Hoogovens.

Hutchins, E., Hollan, J., and Norman, D. A. (1985). Direct manipulation interfaces. Human-Computer Interaction, 1, 311-338.

James, W. (1890). The principles of psychology. New York: Holt.

Kieras, D. E., and Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365-394.

Klein, G. A. (in press). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine research (vol. 5). Greenwich, CT: JAI Press.

Kotovsky, K., Hayes, J. R., and Simon, H. A. (1985). Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 17, 248-294.

Kruglanski, A., Friedland, N., and Farkash, E. (1984). Lay persons' sensitivity to statistical information: The case of high perceived applicability. Journal of Personality and Social Psychology, 46, 503-518.

Lesh, R. (1987). The evolution of problem representations in the presence of powerful conceptual amplifiers. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics. Hillsdale, NJ: Erlbaum.

Malone, T. W. (1983). How do people organize their desks? Implications for designing office automation systems. ACM Transactions on Office Information Systems, 1, 99-112.

Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press. (Special issue of International Journal of Man-Machine Studies, vol. 27.)

March, J. G., and Weisinger-Baylon, R. (Eds.). (1986). Ambiguity and command. Marshfield, MA: Pitman Publishing.

McKendree, J., and Carroll, J. M. (1986). Advising roles of a computer consultant. In M. Mantei and P. Orbeton (Eds.), Human Factors in Computing Systems: CHI'86 Conference Proceedings (pp. 35-40). New York: ACM/SIGCHI.

Miller, P. L. (1983). ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 449-461.

Mitchell, C., and Forren, M. G. (1987). Multimodal user input to supervisory control systems: Voice-augmented keyboard. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 594-607.

Mitchell, C., and Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.

Miyata, Y., and Norman, D. A. (1986). Psychological issues in support of multiple activities. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.

Montmollin, M. de, and De Keyser, V. (1985). Expert logic vs. operator logic. In G. Johannsen, G. Mancini, and L. Martensson (Eds.), Analysis, design and evaluation of man-machine systems. Ispra, Italy: CEC-JRC/IFAC.

Muir, B. (1987). Trust between humans and machines. International Journal of Man-Machine Studies, 27, 527-539. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Newell A and Card S K (1985) The prospects for psy-chological science in human-computer interactionHuman-Computer Interaction I 209-242

Noble D F (1984) Forces of production A social history ofindustrial automation New York Alfred A Knopf

Norman D A (1981) Steps towards a cognitive engineering(Tech Report) San Diego University of CaliforniaSan Diego Program in Cognitive Science

Norman D A (1983) Design rules based on analyses ofhuman error Communications of the ACM 26254-258

Norman D A and Draper S W (1986) User-centeredsystem design New perspectives on human-computer in-teraction Hillsdale NJ Erlbaum

Pea R D (1985) Beyond amplification Using the com-puter to reorganize mental functioning Educationalpsychologist 20 167-182

Perkins D and Martin F (1986) Fragile knowledge andneglected strategies in novice programmers In E So-loway and S Iyengar (Eds) Empirical studies of pro-grammers Norwood NJ Ablex

Pew R W et at (1986) Cockpit automation technology(Tech Report 6133) Cambridge MA BBN Laborato-ries Inc

Pope R H (1978) Power station control room and deskdesign Alarm system and experience in the use of CRTdisplays In Proceedings of the International Sympo-sium on Nuclear Power Plant Control and Instrumenta-tion Cannes France

August 1988-429

Pople H Jr (1985) Evolution of an expert system Frominternist to caduceus In L De Lotto and M Stefanelli(Eds) Artificial intelligence in medicine New York El-sevier Science Publishers

Quinn L and Russell D M (1986) Intelligent interfacesUser models and planners In M Mantei and P Ober-ton (Eds) Human factors in computing systemsCHI86 Conference Proceedings (pp 314-320) NewYork ACMSIGCHI

Rasmussen J (1986) Information processing and human-machine interaction An approach to cognitive engineer-ing New York North-Holland

Rizzo A Bagnara S and Visciola M (1987) Humanerror detection processes International Journal ofMan-Machine Studies 27 555-570 Also in ManciniG Woods D and Hollnagel E (Eds) (in press) Cog-nitive engineering in dynamic worlds London Aca-demic Press

Robertson G McCracken D and Newell A (1981) TheZOG approach to man-machine communication Inter-nationaJ ournal of M an-Machine Studies 14 461-488

Rochlin G I La Porte T R and Roberts K H (inpress) The self-designing high-reliability organiza-tion Aircraft carrier flight operations at sea NavalWar College Review

Roth E M Bennett K and Woods D D (1987) Humaninteraction with an intelligent machine Interna-tional Journal of Man-Machine Studies 27 479-525Also in Mancini G Woods D and Hollnagel E(Eds) (in press) Cognitive engineering in dynamicworlds London Academic Press

Roth E M and Woods D D (1988) Aiding human per-formance L Cognitive analysis Le Travail Humain51(1)39-64

Schum D A (1980) Current developments in research oncascaded inference In T S Wallstein (Ed) Cognitiveprocesses in decision and choice behavior HillsdaleNJ Erlbaurn

Selfridge O G Rissland E L and Arbib M A (1984)Adaptive control of ill-defined systems New York Ple-num Press

Sheridan T and Hennessy R (Eds) (1984) Research andmodeling of supervisory control behavior WashingtonDC National Academy Press

Shneiderman B (1986) Seven plus or minus two centralissues in human-computer interaction In M Manteiand P Obreton (Eds) Human factors in computingsystems CHI86 Conference Proceedings (pp 343-349)New York ACMSIGCHL

Stefik M Foster G Bobrow D Kahn K Lanning Sand Suchman L (1985 September) Beyond the chalk-board Using computers to support collaboration andproblem solving in meetings (Tech Report) Palo AltoCA Intelligent Systems Laboratory Xerox Palo AltoResearch Center

Suchman L A (1987) Plans and situated actions Theproblem of human-machine communication Cam-bridge Cambridge University Press

Wiecha C and Henrion M (1987) A graphical tool forstructuring and understanding quantitative decisionmodels In Proceedings of Workshop on Visual Lan-guages New York IEEE Computer Society

Wiener E (1985) Beyond the sterile cockpit Human Fac-tors 27 75-90

Woods D D (1984) Visual momentum A concept to im-prove the cognitive coupling of person and computer

430 -August 1988

International Journal of Man-Machine Studies 21229-244

Woods D D (1986) Paradigms for intelligent decisionsupport In E Hollnagel G Mancini and D D Woods(Eds) Intelligent decision support in process environ-ments New York Springer-Verlag

Woods D D (1988) Coping with complexity The psy-chology of human behavior in complex systems InL P Goodstein H B Andersen and S E Olsen (Eds)Mental models tasks and errors A collection of essays tocelebrate Jens Rasmussens 60th birthday London Tay-lor and Francis

Woods D D and Hollnagel E (1987) Mapping cognitivedemands in complex problem solving worlds (specialissue on knowledge acquisition for knowledge-basedsystems) International Journal of Man-Machine Stud-ies 26 257-275

HUMAN FACTORS

Woods D D and Roth E M (1986) Models of cognitivebehavior in nuclear power plant personnel (NUREG-CR-4532) Washington DC US Nuclear RegulatoryCommission

Woods D D and Roth E M (l988a) Cognitive systemsengineering In M Helander (Ed) Handbook ofhuman-computer interaction New York North-Hol-land

Woods D D and Roth E M (l988b) Aiding human per-formance II From cognitive analysis to support sys-tems Le Travail Humain 51 139-172

Woods D D Roth E M and Pople H (1987) CognitiveEnvironment Simulation An artificial intelligence sys-tem for human performance assessment (NUREG-CR-4862) Washington DC US Nuclear Regulatory Com-mission


tutes a theory of what these individuals know (including misconceptions). Human performance aiding then focuses on providing missing knowledge and correcting the knowledge bugs. From the point of view of computational power, more knowledge and more correct knowledge can be embodied and delivered in the form of a rule-based expert system, following a knowledge acquisition phase that determines the fine-grained domain knowledge.
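The rule-based expert system view mentioned above can be illustrated with a minimal forward-chaining sketch. The rules and fact names below are invented for illustration; they are not drawn from any system discussed in the paper.

```python
# Illustrative sketch of the "knowledge as rules" view: domain knowledge is
# encoded as condition -> conclusion rules and delivered by a simple
# forward-chaining inference loop. Rule content is hypothetical.

rules = [
    ({"pressure_low", "level_dropping"}, "suspect_leak"),
    ({"suspect_leak", "flow_mismatch"}, "confirm_leak"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"pressure_low", "level_dropping", "flow_mismatch"}, rules)
print("confirm_leak" in derived)  # True
```

The sketch makes the paper's point concrete: adding "more knowledge" in this paradigm means adding rules, which says nothing by itself about whether that knowledge will be activated in context.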

But the more critical question for effective human performance may be how knowledge is activated and utilized in the actual problem-solving environment (e.g., Bransford, Sherwood, Vye, and Rieser, 1986; Cheng et al., 1986). The question concerns not merely whether the problem solver knows some particular piece of domain knowledge, such as the relationship between two entities: Does he or she know that it is relevant to the problem at hand, and does he or she know how to utilize this knowledge in problem solving? Studies of education and training often show that students successfully acquire knowledge that is potentially relevant to solving domain problems but often fail to exhibit skilled performance; for example, students who can solve mathematical exercises may fail on equivalent word problems (see Gentner and Stevens, 1983, for examples).

The fact that people possess relevant knowledge does not guarantee that that knowledge will be activated and utilized when needed in the actual problem-solving environment. This is the issue of expression of knowledge. Education and training tend to assume that if a person can be shown to possess a piece of knowledge in any circumstance, then this knowledge should be accessible under all conditions in which it might be useful. In contrast, a variety of research has revealed dissociation effects whereby knowledge accessed in one context remains inert in another (Bransford et al., 1986; Cheng et al., 1986; Perkins and Martin, 1986). For example, Gick and Holyoak (1980) found that unless explicitly prompted, people will fail to apply a recently learned problem-solution strategy to an isomorphic problem (see also Kotovsky, Hayes, and Simon, 1985). The critical question is not to show that the problem solver possesses domain knowledge, but rather the more stringent criterion that situation-relevant knowledge is accessible under the conditions in which the task is performed. This has been called the problem of inert knowledge: knowledge that is accessed only in a restricted set of contexts.

The general conclusion of studies on the problem of inert knowledge is that possession of the relevant domain knowledge or strategies is not by itself sufficient to ensure that this knowledge will be accessed in new contexts. Off-line training experiences need to promote an understanding of how concepts and procedures can function as tools for solving relevant problems (Bransford et al., 1986; Brown, Bransford, Ferrara, and Campione, 1983; Brown and Campione, 1986). Training has to be about more than simply student knowledge acquisition; it must also enhance the expression of knowledge by conditionalizing knowledge to its use via information about triggering conditions and constraints (Glaser, 1984).

Similarly, on-line representations of the world can help or hinder problem solvers in recognizing what information or strategies are relevant to the problem at hand (Woods, 1986). For example, Fischhoff, Slovic, and Lichtenstein (1979) and Kruglanski, Friedland, and Farkash (1984) found that judgmental biases (e.g., representativeness) were greatly reduced or eliminated when aspects of the situation cued the relevance of statistical information and reasoning. Thus one dimension along which representations vary is their ability to provide prompts to the knowledge relevant in a given context.

The challenge for cognitive engineering is to study and develop ways to enhance the expression of knowledge and to avoid inert knowledge. What training content and experiences are necessary to develop conditionalized knowledge (Glaser, 1984; Lesh, 1987; Perkins and Martin, 1986)? What representations cue people about the knowledge that is relevant to the current context of goals, system state, and practitioner intentions (Wiecha and Henrion, 1987; Woods and Roth, 1988)?

About Systems

Cognitive engineering is about systems. One source of tremendous confusion has been an inability to clearly define the systems of interest. From one point of view, the computer program being executed is the end application of concern. In this case one often speaks of the interface, the tasks performed within the syntax of the interface, and human users of the interface. Notice that the application world (what the interface is used for) is deemphasized. The bulk of work on human-computer interaction takes this perspective. Issues of concern include designing for learnability (e.g., Brown and Newman, 1985; Carroll and Carrithers, 1984; Kieras and Polson, 1985) and designing for ease and pleasurableness of use (Malone, 1983; Norman, 1983; Shneiderman, 1986).

A second perspective is to distinguish the interface from the application world (Hollnagel, Mancini, and Woods, 1986; Mancini, Woods, and Hollnagel, in press; Miyata and Norman, 1986; Rasmussen, 1986; Stefik et al., 1985). For example, text-editing tasks are performed only in some larger context, such as transcription, data entry, and composition. The interface is an external representation of an application world; that is, a medium through which agents come to know and act on the world: troubleshooting electronic devices (Davis, 1983), logistic maintenance systems, managing data communication networks, managing power distribution networks, medical diagnosis (Cohen et al., 1987; Gadd and Pople, 1987), aircraft and helicopter flight decks (Pew et al., 1986), air traffic control systems, process control, accident response (Woods, Roth, and Pople, 1987), and command and control of a battlefield (e.g., Fischhoff et al., 1986). Tasks are properties of the world in question, although performance of these fundamental tasks (i.e., demands) is affected by the design of the external representation (e.g., Mitchell and Saisi, 1987). The human is not a passive user of a computer program but is an active problem solver in some world. Therefore we will generally refer to people as domain agents, actors, or problem solvers, and not as users.

In part, the difference in the foregoing views can be traced to differences in the cognitive complexity of the domain task being supported. Research on person-computer interaction has typically dealt with office applications (e.g., word processors for document preparation or copying machines for duplicating material) in which the goals to be accomplished (e.g., replace word 1 with word 2) and the steps required to accomplish them are relatively straightforward. These applications fall at one extreme of the cognitive complexity space. In contrast, there are many decision-making and supervisory environments (e.g., military situation assessment, medical diagnosis) in which problem formulation, situation assessment, goal definition, plan generation, and plan monitoring and adaptation are significantly more complex. It is in designing interfaces and aids for these applications that it is essential to distinguish the world to be acted on from the interface or window on the world (how one comes to know that world) and from agents who can act directly or indirectly on the world.

The system of interest in design should not be the machine problem solver per se, nor should the focus of interest in evaluation be the performance of the machine problem solver alone. Ultimately the focus must be the design and the performance of the human-machine problem-solving ensemble: how to couple human intelligence and machine power in a single integrated system that maximizes overall performance.

About Multiple Cognitive Agents

A large number of the worlds that cognitive engineering should be able to address contain multiple agents who can act on the world in question (e.g., command and control, process control, data communication networks). Not only do we need to be clear about where systemic boundaries are drawn with respect to the application world and interfaces to or representations of the world, we also need to be clear about the different agents who can act directly or indirectly on the world. Cognitive engineering must be able to address systems with multiple cognitive agents. This applies to multiple human cognitive systems, often called distributed decision making (e.g., Fischhoff, 1986; Fischhoff et al., 1986; March and Weisinger-Baylon, 1986; Rochlin, La Porte, and Roberts, in press; Schum, 1980).

Because of the expansions in computational powers, the machine element can be thought of as a partially autonomous cognitive agent in its own right. This raises the problem of how to build a cognitive system that combines both human and machine cognitive systems, or in other words, joint cognitive systems (Hollnagel, Mancini, and Woods, 1986; Mancini et al., in press). When a system includes these machine agents, the human role is not eliminated but shifted. This means that changes in automation are changes in the joint human-machine cognitive system, and the design goal is to maximize overall performance.

One metaphor that is often invoked to frame questions about the relationship between human and machine intelligence is to examine human-human relationships in multiperson problem-solving or advisory situations and then to transpose the results to human-intelligent machine interaction (e.g., Coombs and Alty, 1984). Following this metaphor leads Muir (1987) to raise the question of the role of trust between man and machine in effective performance. One provocative question that Muir's analysis generates is how the level of trust between human and machine problem solvers affects performance. The practitioner's judgment of machine competence or predictability can be miscalibrated, leading to excessive trust or mistrust. Either a system will be underutilized or ignored when it could provide effective assistance, or the practitioner will defer to the machine even in areas that challenge or exceed the machine's range of competence.

Another question concerns how trust is established between human and machine. Trust or mistrust is based on cumulative experience with the other agent that provides evidence about enduring characteristics of the agent, such as competence and predictability. This means that factors about how new technology is introduced into the work environment can play a critical role in building or undermining trust in the machine problem solver. If this stage of technology introduction is mishandled (for example, practitioners are exposed to the system before it is adequately debugged), the practitioner's trust in the machine's competence can be undermined. Muir's analysis shows how variations in explanation and display facilities affect how the person will use the machine by affecting his or her ability to see how the machine works and, therefore, his or her level of calibration. Muir also points out how human information-processing biases can affect how the evidence of experience is interpreted in the calibration process.

A second metaphor that is frequently invoked is supervisory control (Rasmussen, 1986; Sheridan and Hennessy, 1984). Again the machine element is thought of as a semiautonomous cognitive system, but in this case it is a lower-order subordinate, albeit partially autonomous. The human supervisor generally has a wider range of responsibility, and he or she possesses ultimate responsibility and authority. Boy (1987) uses this metaphor to guide the development of assistant systems built from AI technology.

In order for a supervisory control architecture between human and machine agents to function effectively, several requirements must be met that, as Woods (1986) has pointed out, are often overlooked when tool-driven constraints dominate design. First, the supervisor must have real as well as titular authority; machine problem solvers can be designed and introduced in such a way that the human retains the responsibility for outcomes without any effective authority. Second, the supervisor must be able to redirect the lower-order machine cognitive system. Roth et al. (1987) found that some practitioners tried to devise ways to instruct an expert system in situations in which the machine's problem solving had broken down, even when the machine's designer had provided no such mechanisms. Third, in order to be able to supervise another agent, there is need for a common or shared representation of the state of the world and of the state of the problem-solving process (Woods and Roth, 1988b); otherwise, communication between the agents will break down (e.g., Suchman, 1987).

Significant attention has been devoted to the issue of how to get intelligent machines to assess the goals and intentions of humans without requiring explicit statements (e.g., Allen and Perrault, 1980; Quinn and Russell, 1986). However, the supervisory control metaphor highlights that it is at least as important to pay attention to what information or knowledge people need to track the intelligent machine's state of mind (Woods and Roth, 1988a).

A third metaphor is to consider the new machine capabilities as extensions and expansions along a dimension of machine power. In this metaphor, machines are tools; people are tool builders and tool users. Technological development has moved from physical tools (tools that magnify capacity for physical work) to perceptual tools (extensions to the human perceptual apparatus, such as medical imaging) and now, with the arrival of AI technology, to cognitive tools. (Although this type of tool has a much longer history, e.g., aide-memoires or decision analysis, AI has certainly increased the interest in and ability to provide cognitive tools.)

In this metaphor, the question of the relationship between machine and human takes the form: what kind of tool is an intelligent machine (e.g., Ehn and Kyng, 1984; Suchman, 1987; Woods, 1986)? At one extreme, the machine can be a prosthesis that compensates for a deficiency in human reasoning or problem solving. This could be a local deficiency for the population of expected human practitioners or a global weakness in human reasoning. At the other extreme, the machine can be an instrument in the hands of a fundamentally competent but limited-resource human practitioner (Woods, 1986). The machine aids the practitioner by providing increased or new kinds of resources (either knowledge resources or processing resources, such as an expanded field of attention).

The extra resources may support improved performance in several ways. One path is to off-load overhead information-processing activities from the person to the machine to allow the human practitioner to focus his or her resources on higher-level issues and strategies. Examples include keeping track of multiple ongoing activities in an external memory, performing basic data computations or transformations, and collecting the evidence related to decisions about particular domain issues, as occurred recently with new computer-based displays in nuclear power plant control rooms. Extra resources may help to improve performance in another way, by allowing a restructuring of how the human performs the task, shifting performance onto a new, higher plateau (see Pea, 1985). This restructuring concept is in contrast to the usual notion of new systems as amplifiers of user capabilities. As Pea (1985) points out, the amplification metaphor implies that support systems improve human performance by increasing the strength or power of the cognitive processes the human problem solver goes through to solve the problem, but without any change in the underlying activities, processes, or strategies that determine how the problem is solved. Alternatively, the resources provided (or not provided) by new performance aids and interface systems can support restructuring of the activities, processes, or strategies that carry out the cognitive functions relevant to performing domain tasks (e.g., Woods and Roth, 1988). New levels of performance are now possible, and the kinds of errors one is prone to (and therefore the consequences of errors) change as well.


The instrumental perspective suggests that the most effective power provided by good cognitive tools is conceptualization power (Woods and Roth, 1988a). The importance of conceptualization power in effective problem-solving performance is often overlooked because the part of the problem-solving process that it most crucially affects, problem formulation and reformulation, is often left out of studies of problem solving and the design basis of new support systems. Support systems that increase conceptualization power (1) enhance a problem solver's ability to experiment with possible worlds or strategies (e.g., Hollan, Hutchins, and Weitzman, 1984; Pea, 1985; Woods et al., 1987); (2) enhance the ability to visualize or to make concrete the abstract and uninspectable (analogous to perceptual tools), in order to better see the implications of a concept and to help one restructure one's view of the problem (Becker and Cleveland, 1984; Coombs and Hartley, 1987; Hutchins, Hollan, and Norman, 1985; Pople, 1985); and (3) enhance error detection by providing better feedback about the effects or results of actions (Rizzo, Bagnara, and Visciola, 1987).

Problem-Driven

Cognitive engineering is problem-driven, tool-constrained. This means that cognitive engineering must be able to analyze a problem-solving context and understand the sources of both good and poor performance; that is, the cognitive problems to be solved or challenges to be met (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

To build a cognitive description of a problem-solving world, one must understand how representations of the world interact with different cognitive demands imposed by the application world in question and with characteristics of the cognitive agents, both for existing and prospective changes in the world. Building a cognitive description is part of a problem-driven approach to the application of computational power.

The results from this analysis are used to define the kinds of solutions that are needed to enhance successful performance: to meet cognitive demands of the world, to help the human function more expertly, and to eliminate or mitigate error-prone points in the total cognitive system (demand-resource mismatches). The results of this process can then be deployed in many possible ways, as constrained by tool-building limitations and tool-building possibilities: exploration training worlds, new information representation aids, advisory systems, or machine problem solvers (see Roth and Woods, 1988; Woods and Roth, 1988).

In tool-driven approaches, knowledge acquisition focuses on describing domain knowledge in terms of the syntax of computational mechanisms; that is, the language of implementation is used as a cognitive language. Semantic questions are displaced either to whoever selects the computational mechanisms or to the domain expert who enters knowledge.

The alternative is to provide an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean about problem solving in the domain (Woods, 1988). Acquiring and using a domain semantics is essential to avoiding potential errors and specifying performance boundaries when building intelligent machines (Roth et al., 1987). Techniques for analyzing cognitive demands not only help characterize a particular world but also help to build a repertoire of general cognitive situations that are transportable. There is a clear trend toward this conception of knowledge acquisition in order to achieve more effective decision support and fewer brittle machine problem solvers (e.g., Clancey, 1985; Coombs, 1986; Gruber and Cohen, 1987).

AN EXAMPLE OF HOW COGNITIVE AND COMPUTATIONAL TECHNOLOGIES INTERACT

To illustrate the role of cognitive engineering in the deployment of new computational powers, consider a case in human-computer interaction (for other cases, see Mitchell and Forren, 1987; Mitchell and Saisi, 1987; Roth and Woods, 1988; Woods and Roth, 1988). It is one example of how purely technology-driven deployment of new automation capabilities can produce unintended and unforeseen negative consequences. In this case an attempt was made to implement a computerized procedure system using a commercial hypertext system for building and navigating large network data bases. Because cognitive engineering issues were not considered in the application of the new technology, a high-level person-machine performance problem resulted: the getting-lost phenomenon (Woods, 1984). Based on a cognitive analysis of the world's demands, it was possible to redesign the system to support domain actors and eliminate the getting-lost problems (Elm and Woods, 1985). Through cognitive engineering it proved possible to build a more effective computerized procedure system that, for the most part, was within the technological boundaries set by the original technology chosen for the application.

The data base application in question was designed to computerize paper-based instructions for nuclear power plant emergency operation. The system was built on a network data base shell with a built-in interface for navigating the network (Robertson, McCracken, and Newell, 1981). The shell already treated human-computer interface issues, so it was assumed possible to create the computerized system simply by entering domain knowledge (i.e., the current instructions as implemented for the paper medium) into the interface and network data base framework provided by the shell.

The system contained two kinds of frames: menu frames, which served to point to other frames, and content frames, which contained instructions from the paper procedures (generally one procedure step per frame). In preliminary tests of the system, it was found that people uniformly failed to complete recovery tasks with procedures computerized in this way. They became disoriented or lost: unable to keep procedure steps in pace with plant behavior, unable to determine where they were in the network of frames, unable to decide where to go next, or unable even to find places where they knew they should be (i.e., they diagnosed the situation and knew the appropriate responses as trained operators, yet could not find the relevant procedural steps in the network). These results were found with people experienced with the paper-based procedures and plant operations, as well as with people knowledgeable in the frame-network software package and how the procedures were implemented within it.
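The menu-frame/content-frame structure can be sketched as a simple data structure. The frame names, links, and step texts below are invented for illustration; the original shell's internals are not described in the paper.

```python
# Illustrative sketch (not the original shell): a procedure network in which
# menu frames only point to other frames and content frames hold one step each.
from dataclasses import dataclass, field

@dataclass
class Frame:
    name: str
    kind: str                                   # "menu" or "content"
    text: str = ""                              # step text (content frames only)
    links: list = field(default_factory=list)   # frames reachable from here

network = {
    "top": Frame("top", "menu", links=["loss-of-coolant", "steam-leak"]),
    "loss-of-coolant": Frame("loss-of-coolant", "menu", links=["loca-step-1"]),
    "steam-leak": Frame("steam-leak", "menu", links=["sl-step-1"]),
    "loca-step-1": Frame("loca-step-1", "content",
                         text="Verify safety injection."),
    "sl-step-1": Frame("sl-step-1", "content",
                       text="Isolate the faulted steam generator."),
}

def visible(frame_name):
    """The operator's 'window on the world': only one frame is shown at a
    time, so context (where am I, what comes next) must be held in memory."""
    f = network[frame_name]
    return f.text if f.kind == "content" else f.links
```

The sketch makes the getting-lost problem visible in miniature: at any moment the operator sees either one step or one list of links, and nothing else about the structure of the recovery.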

What was the source of the disorientation problem? It resulted from a failure to analyze the cognitive demands associated with using procedures in an externally paced world. For example, in using the procedures, the operator often is required to interrupt one activity and transition to another step in the procedure, or to a different procedure, depending on plant conditions and plant responses to operator actions. As a result, operators need to be able to rapidly transition across procedure boundaries and to return to incomplete steps. Because of the size of a frame, there was a very high proportion of menu frames relative to content frames, and the content frames provided a narrow window on the world. This structure made it difficult to read ahead to anticipate instructions, to mark steps pending completion and return to them easily, to see the organization of the steps, or to mark a trail of activities carried out during the recovery. Many activities that are inherently easy to perform in a physical book turned out to be very difficult to carry out (for example, reading ahead). The result was a mismatch between user information-processing activities during domain tasks and the structure of the interface as a representation of the world of recovery from abnormalities.

These results triggered a full design cycle that began with a cognitive analysis to determine the user information-handling activities needed to effectively accomplish recovery tasks in emergency situations. Following procedures was not simply a matter of linear, step-by-step execution of instructions; rather, it required the ability to maintain a broad context of the purpose and relationships among the elements in the procedure (see also Brown et al., 1982; Roth et al., 1987). Operators needed to maintain awareness of the global context (i.e., how a given step fits into the overall plan), to anticipate the need for actions by looking ahead, and to monitor for changes in plant state that would require adaptation of the current response plan.

A variety of cognitive engineering techniques were utilized in a new interface design to support the demands (see Woods, 1984). First, a spatial metaphor was used to make the system more like a physical book. Second, display selection/movement options were presented in parallel, rather than sequentially, with procedural information (defining two types of windows in the interface). Transition options were presented at several grains of analysis to support moves from step to step as easily as moves across larger units in the structure of the procedure system. In addition, incomplete steps were automatically tracked, and those steps were made directly accessible (e.g., electronic bookmarks or placeholders).
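The automatic placeholder mechanism described above can be illustrated with a small data structure. This is a minimal sketch under our own assumptions, not the original implementation; the class and step names are hypothetical.

```python
# Hypothetical sketch of automatic bookmark tracking in a computerized
# procedure system: leaving a step before completing it records a
# placeholder, so incomplete steps stay directly accessible.

class ProcedureNavigator:
    def __init__(self, steps):
        self.steps = steps          # ordered list of step identifiers
        self.completed = set()
        self.pending = []           # automatically tracked bookmarks
        self.current = steps[0]

    def complete(self):
        self.completed.add(self.current)

    def goto(self, step):
        # Leaving an incomplete step automatically creates a placeholder.
        if self.current not in self.completed and self.current not in self.pending:
            self.pending.append(self.current)
        # Returning to a bookmarked step clears its placeholder.
        if step in self.pending:
            self.pending.remove(step)
        self.current = step

    def open_bookmarks(self):
        return list(self.pending)
```

The point of the sketch is that the interface, not the operator's memory, carries the burden of remembering which steps remain open across rapid transitions.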

To provide the global context within which the current procedure step occurs, the step of interest is presented in detail and is embedded in a skeletal structure of the larger response plan of which it is a part (Furnas, 1986; Woods, 1984). Context sensitivity was supported by displaying the rules for possible adaptation or shifts in the current response plan that are relevant to the current context, in parallel with current relevant options and the current region of interest in the procedures (a third window). Note how the cognitive analysis of the domain defined what types of data needed to be seen effectively in parallel, which then determined the number of windows required. Also note that the cognitive engineering redesign was, with a few exceptions, directly implementable within the base capabilities of the interface shell.
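The detail-plus-skeleton presentation can be sketched as a small function in the spirit of Furnas's (1986) fisheye views. All names and the tree layout below are our own illustrative assumptions, not the original system.

```python
# Hypothetical fisheye sketch: show the focus step in detail, embedded in
# a skeleton of its ancestors (the larger response plan) and its siblings.

def fisheye_view(parent_of, children_of, focus):
    # Walk up from the focus step to collect its ancestors, root first.
    ancestors = []
    node = focus
    while node in parent_of:
        node = parent_of[node]
        ancestors.insert(0, node)
    # Siblings of the focus step give the local plan structure.
    siblings = [s for s in children_of.get(parent_of.get(focus), [])
                if s != focus]
    return {"detail": focus, "context": ancestors, "siblings": siblings}
```

Given a procedure tree, the function returns what the display would show in parallel: the current step in detail, plus the skeletal plan context that tells the operator where that step sits.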

As the foregoing example saliently demonstrates, there can be severe penalties for failing to adequately map the cognitive demands of the environment. However, if we understand the cognitive requirements imposed by the domain, then a variety of techniques can be employed to build support systems for those functions.

SUMMARY

The problem of providing effective decision support hinges on how the designer decides what will be useful in a particular application. Can researchers provide designers with concepts and techniques to determine what will be useful support systems, or are we condemned to simply build what can be built practically and wait for the judgment of experience? Is principle-driven design possible?

A vigorous and viable cognitive engineering can provide the knowledge and techniques necessary for principle-driven design. Cognitive engineering does this by providing the basis for a problem-driven, rather than a technology-driven, approach whereby the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver. Cognitive engineering can address (a) existing cognitive systems, in order to identify deficiencies that cognitive system redesign can correct, and (b) prospective cognitive systems, as a design tool during the allocation of cognitive tasks and the development of an effective joint architecture. In this paper we have attempted to outline the questions that need to be answered to make this promise real and to point to research that already has begun to provide the necessary concepts and techniques.

August 1988-427

REFERENCES

Allen, J., and Perrault, C. (1980). Analyzing intention in utterances. Artificial Intelligence, 15, 143-178.
Becker, R. A., and Cleveland, W. S. (1984). Brushing the scatterplot matrix: High-interaction graphical methods for analyzing multidimensional data (Tech. Report). Murray Hill, NJ: AT&T Bell Laboratories.
Boy, G. A. (1987). Operator assistant systems. International Journal of Man-Machine Studies, 27, 541-554. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.
Bransford, J., Sherwood, R., Vye, N., and Rieser, J. (1986). Teaching and problem solving: Research foundations. American Psychologist, 41, 1078-1089.
Brown, A. L., Bransford, J. D., Ferrara, R. A., and Campione, J. C. (1983). Learning, remembering, and understanding. In J. H. Flavell and E. M. Markman (Eds.), Carmichael's manual of child psychology. New York: Wiley.
Brown, A. L., and Campione, J. C. (1986). Psychological theory and the study of learning disabilities. American Psychologist, 41, 1059-1068.
Brown, J. S., and Burton, R. R. (1978). Diagnostic models for procedural bugs in basic mathematics. Cognitive Science, 2, 155-192.
Brown, J. S., Moran, T. P., and Williams, M. D. (1982). The semantics of procedures (Tech. Report). Palo Alto, CA: Xerox Palo Alto Research Center.
Brown, J. S., and Newman, S. E. (1985). Issues in cognitive and social ergonomics: From our house to Bauhaus. Human-Computer Interaction, 1, 359-391.
Brown, J. S., and VanLehn, K. (1980). Repair theory: A generative theory of bugs in procedural skills. Cognitive Science, 4, 379-426.
Carroll, J. M., and Carrithers, C. (1984). Training wheels in a user interface. Communications of the ACM, 27, 800-806.
Cheng, P. W., and Holyoak, K. J. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.
Cheng, P., Holyoak, K., Nisbett, R., and Oliver, L. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 18, 293-328.
Clancey, W. J. (1985). Heuristic classification. Artificial Intelligence, 27, 289-350.
Cohen, P., Day, D., Delisio, J., Greenberg, M., Kjeldsen, R., and Suthers, D. (1987). Management of uncertainty in medicine. In Proceedings of the IEEE Conference on Computers and Communications. New York: IEEE.
Coombs, M. J. (1986). Artificial intelligence and cognitive technology: Foundations and perspectives. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.
Coombs, M. J., and Alty, J. L. (1984). Expert systems: An alternative paradigm. International Journal of Man-Machine Studies, 20, 21-43.
Coombs, M. J., and Hartley, R. T. (1987). The MGR algorithm and its application to the generation of explanations for novel events. International Journal of Man-Machine Studies, 27, 679-708. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.
Davis, R. (1983). Reasoning from first principles in electronic troubleshooting. International Journal of Man-Machine Studies, 19, 403-423.
De Keyser, V. (1986). Les interactions hommes-machine: Caractéristiques et utilisations des différents supports d'information par les opérateurs (Person-machine interaction: How operators use different information channels) (Rapport Politique Scientifique/FAST No. 8). Liège, Belgium: Psychologie du Travail, Université de l'État à Liège.
Dennett, D. (1982). Beyond belief. In A. Woodfield (Ed.), Thought and object. Oxford: Clarendon Press.
Dorner, D. (1983). Heuristics and cognition in complex systems. In R. Groner, M. Groner, and W. F. Bischof (Eds.), Methods of heuristics. Hillsdale, NJ: Erlbaum.
Ehn, P., and Kyng, M. (1984). A tool perspective on design of interactive computer support for skilled workers. Unpublished manuscript, Swedish Center for Working Life, Stockholm.
Elm, W. C., and Woods, D. D. (1985). Getting lost: A case study in interface design. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 927-931). Santa Monica, CA: Human Factors Society.
Fischhoff, B. (1986). Decision making in complex systems. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support. New York: Springer-Verlag.
Fischhoff, B., Slovic, P., and Lichtenstein, S. (1979). Improving intuitive judgment by subjective sensitivity analysis. Organizational Behavior and Human Performance, 23, 339-359.
Fischhoff, B., Lanir, Z., and Johnson, S. (1986). Military risk taking and modern C3I (Tech. Report 86-2). Eugene, OR: Decision Research.
Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgments. Psychological Bulletin, 101, 75-90.
Furnas, G. W. (1986). Generalized fisheye views. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 16-23). New York: ACM/SIGCHI.
Gadd, C. S., and Pople, H. E. (1987). An interpretation synthesis model of medical teaching rounds discourse: Implications for expert system interaction. International Journal of Educational Research, 1.
Gentner, D., and Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Erlbaum.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Gick, M. L., and Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-365.
Glaser, R. (1984). Education and thinking: The role of knowledge. American Psychologist, 39, 93-104.
Gruber, T., and Cohen, P. (1987). Design for acquisition: Principles of knowledge system design to facilitate knowledge acquisition (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 143-159.
Henderson, A., and Card, S. (1986). Rooms: The use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface (Tech. Report). Palo Alto, CA: Xerox PARC.
Hirschhorn, L. (1984). Beyond mechanization: Work and technology in a postindustrial age. Cambridge, MA: MIT Press.
Hollan, J., Hutchins, E., and Weitzman, L. (1984). Steamer: An interactive inspectable simulation-based training system. AI Magazine, 4, 15-27.
Hollnagel, E., Mancini, G., and Woods, D. D. (Eds.). (1986). Intelligent decision support in process environments. New York: Springer-Verlag.
Hollnagel, E., and Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.
Hoogovens Report. (1976). Human factors evaluation: Hoogovens No. 2 hot strip mill (Tech. Report FR251). London: British Steel Corporation/Hoogovens.
Hutchins, E., Hollan, J., and Norman, D. A. (1985). Direct manipulation interfaces. Human-Computer Interaction, 1, 311-338.
James, W. (1890). The principles of psychology. New York: Holt.
Kieras, D. E., and Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365-394.
Klein, G. A. (in press). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine research (vol. 5). Greenwich, CT: JAI Press.
Kotovsky, K., Hayes, J. R., and Simon, H. A. (1985). Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 17, 248-294.
Kruglanski, A., Friedland, N., and Farkash, E. (1984). Laypersons' sensitivity to statistical information: The case of high perceived applicability. Journal of Personality and Social Psychology, 46, 503-518.
Lesh, R. (1987). The evolution of problem representations in the presence of powerful conceptual amplifiers. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics. Hillsdale, NJ: Erlbaum.
Malone, T. W. (1983). How do people organize their desks? Implications for designing office automation systems. ACM Transactions on Office Information Systems, 1, 99-112.
Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press. (Special issue of International Journal of Man-Machine Studies, vol. 27.)
March, J. G., and Weisinger-Baylon, R. (Eds.). (1986). Ambiguity and command. Marshfield, MA: Pitman Publishing.
McKendree, J., and Carroll, J. M. (1986). Advising roles of a computer consultant. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 35-40). New York: ACM/SIGCHI.
Miller, P. L. (1983). ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 449-461.
Mitchell, C., and Forren, M. G. (1987). Multimodal user input to supervisory control systems: Voice-augmented keyboard. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 594-607.
Mitchell, C., and Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.
Miyata, Y., and Norman, D. A. (1986). Psychological issues in support of multiple activities. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.
Montmollin, M. de, and De Keyser, V. (1985). Expert logic vs. operator logic. In G. Johannsen, G. Mancini, and L. Martensson (Eds.), Analysis, design and evaluation of man-machine systems. CEC-JRC Ispra, Italy: IFAC.
Muir, B. (1987). Trust between humans and machines. International Journal of Man-Machine Studies, 27, 527-539. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.
Newell, A., and Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209-242.
Noble, D. F. (1984). Forces of production: A social history of industrial automation. New York: Alfred A. Knopf.
Norman, D. A. (1981). Steps towards a cognitive engineering (Tech. Report). San Diego: University of California, San Diego, Program in Cognitive Science.
Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the ACM, 26, 254-258.
Norman, D. A., and Draper, S. W. (1986). User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.
Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.
Perkins, D., and Martin, F. (1986). Fragile knowledge and neglected strategies in novice programmers. In E. Soloway and S. Iyengar (Eds.), Empirical studies of programmers. Norwood, NJ: Ablex.
Pew, R. W., et al. (1986). Cockpit automation technology (Tech. Report 6133). Cambridge, MA: BBN Laboratories, Inc.
Pope, R. H. (1978). Power station control room and desk design: Alarm system and experience in the use of CRT displays. In Proceedings of the International Symposium on Nuclear Power Plant Control and Instrumentation. Cannes, France.
Pople, H., Jr. (1985). Evolution of an expert system: From Internist to Caduceus. In I. De Lotto and M. Stefanelli (Eds.), Artificial intelligence in medicine. New York: Elsevier Science Publishers.
Quinn, L., and Russell, D. M. (1986). Intelligent interfaces: User models and planners. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 314-320). New York: ACM/SIGCHI.
Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.
Rizzo, A., Bagnara, S., and Visciola, M. (1987). Human error detection processes. International Journal of Man-Machine Studies, 27, 555-570. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.
Robertson, G., McCracken, D., and Newell, A. (1981). The ZOG approach to man-machine communication. International Journal of Man-Machine Studies, 14, 461-488.
Rochlin, G. I., La Porte, T. R., and Roberts, K. H. (in press). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review.
Roth, E. M., Bennett, K., and Woods, D. D. (1987). Human interaction with an intelligent machine. International Journal of Man-Machine Studies, 27, 479-525. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.
Roth, E. M., and Woods, D. D. (1988). Aiding human performance: I. Cognitive analysis. Le Travail Humain, 51(1), 39-64.
Schum, D. A. (1980). Current developments in research on cascaded inference. In T. S. Wallsten (Ed.), Cognitive processes in decision and choice behavior. Hillsdale, NJ: Erlbaum.
Selfridge, O. G., Rissland, E. L., and Arbib, M. A. (1984). Adaptive control of ill-defined systems. New York: Plenum Press.
Sheridan, T., and Hennessy, R. (Eds.). (1984). Research and modeling of supervisory control behavior. Washington, DC: National Academy Press.
Shneiderman, B. (1986). Seven plus or minus two central issues in human-computer interaction. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 343-349). New York: ACM/SIGCHI.
Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S., and Suchman, L. (1985, September). Beyond the chalkboard: Using computers to support collaboration and problem solving in meetings (Tech. Report). Palo Alto, CA: Intelligent Systems Laboratory, Xerox Palo Alto Research Center.
Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.
Wiecha, C., and Henrion, M. (1987). A graphical tool for structuring and understanding quantitative decision models. In Proceedings of Workshop on Visual Languages. New York: IEEE Computer Society.
Wiener, E. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.
Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229-244.
Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.
Woods, D. D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L. P. Goodstein, H. B. Andersen, and S. E. Olsen (Eds.), Mental models, tasks and errors: A collection of essays to celebrate Jens Rasmussen's 60th birthday. London: Taylor and Francis.
Woods, D. D., and Hollnagel, E. (1987). Mapping cognitive demands in complex problem-solving worlds (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 257-275.
Woods, D. D., and Roth, E. M. (1986). Models of cognitive behavior in nuclear power plant personnel (NUREG-CR-4532). Washington, DC: U.S. Nuclear Regulatory Commission.
Woods, D. D., and Roth, E. M. (1988a). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interaction. New York: North-Holland.
Woods, D. D., and Roth, E. M. (1988b). Aiding human performance: II. From cognitive analysis to support systems. Le Travail Humain, 51, 139-172.
Woods, D. D., Roth, E. M., and Pople, H. (1987). Cognitive Environment Simulation: An artificial intelligence system for human performance assessment (NUREG-CR-4862). Washington, DC: U.S. Nuclear Regulatory Commission.


of the situation cued the relevance of statistical information and reasoning. Thus one dimension along which representations vary is their ability to provide prompts to the knowledge relevant in a given context.

The challenge for cognitive engineering is to study and develop ways to enhance the expression of knowledge and to avoid inert knowledge. What training content and experiences are necessary to develop conditionalized knowledge (Glaser, 1984; Lesh, 1987; Perkins and Martin, 1986)? What representations cue people about the knowledge that is relevant to the current context of goals, system state, and practitioner intentions (Wiecha and Henrion, 1987; Woods and Roth, 1988)?

about Systems

Cognitive engineering is about systems. One source of tremendous confusion has been an inability to clearly define the systems of interest. From one point of view, the computer program being executed is the end application of concern. In this case one often speaks of the interface, the tasks performed within the syntax of the interface, and human users of the interface. Notice that the application world (what the interface is used for) is deemphasized. The bulk of work on human-computer interaction takes this perspective. Issues of concern include designing for learnability (e.g., Brown and Newman, 1985; Carroll and Carrithers, 1984; Kieras and Polson, 1985) and designing for ease and pleasurableness of use (Malone, 1983; Norman, 1983; Shneiderman, 1986).

A second perspective is to distinguish the interface from the application world (Hollnagel, Mancini, and Woods, 1986; Mancini, Woods, and Hollnagel, in press; Miyata and Norman, 1986; Rasmussen, 1986; Stefik et al., 1985). For example, text-editing tasks are performed only in some larger context, such


as transcription, data entry, and composition. The interface is an external representation of an application world; that is, a medium through which agents come to know and act on the world: troubleshooting electronic devices (Davis, 1983), logistic maintenance systems, managing data communication networks, managing power distribution networks, medical diagnosis (Cohen et al., 1987; Gadd and Pople, 1987), aircraft and helicopter flight decks (Pew et al., 1986), air traffic control systems, process control, accident response (Woods, Roth, and Pople, 1987), and command and control of a battlefield (e.g., Fischhoff et al., 1986). Tasks are properties of the world in question, although performance of these fundamental tasks (i.e., demands) is affected by the design of the external representation (e.g., Mitchell and Saisi, 1987). The human is not a passive user of a computer program but is an active problem solver in some world. Therefore we will generally refer to people as domain agents or actors or problem solvers, and not as users.

In part, the difference in the foregoing views can be traced to differences in the cognitive complexity of the domain task being supported. Research on person-computer interaction has typically dealt with office applications (e.g., word processors for document preparation or copying machines for duplicating material) in which the goals to be accomplished (e.g., replace word 1 with word 2) and the steps required to accomplish them are relatively straightforward. These applications fall at one extreme of the cognitive complexity space. In contrast, there are many decision-making and supervisory environments (e.g., military situation assessment, medical diagnosis) in which problem formulation, situation assessment, goal definition, plan generation, and plan monitoring and adaptation are significantly more complex. It is in designing interfaces and aids for


these applications that it is essential to distinguish the world to be acted on from the interface or window on the world (how one comes to know that world) and from agents who can act directly or indirectly on the world.

The system of interest in design should not be the machine problem solver per se, nor should the focus of interest in evaluation be the performance of the machine problem solver alone. Ultimately the focus must be the design and the performance of the human-machine problem-solving ensemble: how to couple human intelligence and machine power in a single integrated system that maximizes overall performance.

about Multiple Cognitive Agents

A large number of the worlds that cognitive engineering should be able to address contain multiple agents who can act on the world in question (e.g., command and control, process control, data communication networks). Not only do we need to be clear about where systemic boundaries are drawn with respect to the application world and interfaces to or representations of the world; we also need to be clear about the different agents who can act directly or indirectly on the world. Cognitive engineering must be able to address systems with multiple cognitive agents. This applies to multiple human cognitive systems, often called distributed decision making (e.g., Fischhoff, 1986; Fischhoff et al., 1986; March and Weisinger-Baylon, 1986; Rochlin, La Porte, and Roberts, in press; Schum, 1980).

Because of the expansions in computational powers, the machine element can be thought of as a partially autonomous cognitive agent in its own right. This raises the problem of how to build a cognitive system that combines both human and machine cognitive systems, or, in other words, joint cognitive systems (Hollnagel, Mancini, and Woods, 1986; Mancini et al., in press). When a system includes these machine agents, the human role is not eliminated but shifted. This means that changes in automation are changes in the joint human-machine cognitive system, and the design goal is to maximize overall performance.

One metaphor that is often invoked to frame questions about the relationship between human and machine intelligence is to examine human-human relationships in multiperson problem-solving or advisory situations and then to transpose the results to human-intelligent machine interaction (e.g., Coombs and Alty, 1984). Following this metaphor leads Muir (1987) to raise the question of the role of trust between man and machine in effective performance. One provocative question that Muir's analysis generates is, how does the level of trust between human and machine problem solvers affect performance? The practitioner's judgment of machine competence or predictability can be miscalibrated, leading to excessive trust or mistrust. Either a system will be underutilized or ignored when it could provide effective assistance, or the practitioner will defer to the machine even in areas that challenge or exceed the machine's range of competence.
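The notion of miscalibration can be made concrete with simple arithmetic: compare the practitioner's judged reliability of the machine against its observed success rate. The following sketch is ours, not Muir's; the function name and the tolerance threshold are illustrative assumptions.

```python
# Toy measure of trust calibration (illustrative only): the practitioner's
# judged reliability of the machine vs. its observed success rate over a
# series of cases (1 = machine succeeded, 0 = machine failed).

def calibration(judged_reliability, outcomes, tolerance=0.1):
    observed = sum(outcomes) / len(outcomes)   # fraction of successes
    gap = judged_reliability - observed
    if gap > tolerance:
        return "overtrust"    # deferring beyond the machine's competence
    if gap < -tolerance:
        return "mistrust"     # underutilizing an effective aid
    return "calibrated"
```

A large positive gap corresponds to excessive trust; a large negative gap, to a system that will be ignored when it could help.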

Another question concerns how trust is established between human and machine. Trust or mistrust is based on cumulative experience with the other agent that provides evidence about enduring characteristics of the agent, such as competence and predictability. This means that factors about how new technology is introduced into the work environment can play a critical role in building or undermining trust in the machine problem solver. If this stage of technology introduction is mishandled (for example, practitioners are exposed to the system before it


is adequately debugged), the practitioner's trust in the machine's competence can be undermined. Muir's analysis shows how variations in explanation and display facilities affect how the person will use the machine, by affecting his or her ability to see how the machine works and therefore his or her level of calibration. Muir also points out how human information-processing biases can affect how the evidence of experience is interpreted in the calibration process.

A second metaphor that is frequently invoked is supervisory control (Rasmussen, 1986; Sheridan and Hennessy, 1984). Again the machine element is thought of as a semiautonomous cognitive system, but in this case it is a lower-order subordinate, albeit partially autonomous. The human supervisor generally has a wider range of responsibility, and he or she possesses ultimate responsibility and authority. Boy (1987) uses this metaphor to guide the development of assistant systems built from AI technology.

In order for a supervisory control architecture between human and machine agents to function effectively, several requirements must be met that, as Woods (1986) has pointed out, are often overlooked when tool-driven constraints dominate design. First, the supervisor must have real as well as titular authority; machine problem solvers can be designed and introduced in such a way that the human retains the responsibility for outcomes without any effective authority. Second, the supervisor must be able to redirect the lower-order machine cognitive system. Roth et al. (1987) found that some practitioners tried to devise ways to instruct an expert system in situations in which the machine's problem solving had broken down, even when the machine's designer had provided no such mechanisms. Third, in order to be able to supervise another agent, there is need for a common or shared representation


of the state of the world and of the state of the problem-solving process (Woods and Roth, 1988b); otherwise communication between the agents will break down (e.g., Suchman, 1987).
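The three supervisory-control requirements can be sketched as a minimal architecture. This is our own illustration under stated assumptions; the class names and API are hypothetical, not drawn from any of the systems cited.

```python
# Hypothetical sketch of a supervisory-control architecture meeting the
# three requirements named in the text: (1) the human holds effective
# authority, (2) the machine agent can be redirected, and (3) both agents
# work from a shared representation of world and problem state.

class MachineAgent:
    def __init__(self, shared_state):
        self.shared = shared_state        # requirement 3: shared representation
        self.goal = None

    def redirect(self, goal):             # requirement 2: redirectable
        self.goal = goal
        self.shared["current_goal"] = goal

    def propose(self):
        # Stand-in for the machine's problem solving.
        return f"action-for-{self.goal}"

class Supervisor:
    def __init__(self, agent):
        self.agent = agent

    def accept_or_override(self, proposal, override=None):
        # Requirement 1: real, not titular, authority -- the human's
        # override always wins, and the outcome is posted to shared state.
        chosen = override if override is not None else proposal
        self.agent.shared["last_action"] = chosen
        return chosen
```

The design point is that redirection and override are first-class operations, and every decision is reflected in the shared state both agents can inspect.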

Significant attention has been devoted to the issue of how to get intelligent machines to assess the goals and intentions of humans without requiring explicit statements (e.g., Allen and Perrault, 1980; Quinn and Russell, 1986). However, the supervisory control metaphor highlights that it is at least as important to pay attention to what information or knowledge people need to track the intelligent machine's state of mind (Woods and Roth, 1988a).

A third metaphor is to consider the new machine capabilities as extensions and expansions along a dimension of machine power. In this metaphor, machines are tools; people are tool builders and tool users. Technological development has moved from physical tools (tools that magnify capacity for physical work) to perceptual tools (extensions to human perceptual apparatus, such as medical imaging) and now, with the arrival of AI technology, to cognitive tools. (Although this type of tool has a much longer history, e.g., aide-mémoires or decision analysis, AI has certainly increased the interest in and ability to provide cognitive tools.)

In this metaphor, the question of the relationship between machine and human takes the form of, what kind of tool is an intelligent machine (e.g., Ehn and Kyng, 1984; Suchman, 1987; Woods, 1986)? At one extreme, the machine can be a prosthesis that compensates for a deficiency in human reasoning or problem solving. This could be a local deficiency for the population of expected human practitioners or a global weakness in human reasoning. At the other extreme, the machine can be an instrument in the hands of a fundamentally competent but limited-resource


human practitioner (Woods, 1986). The machine aids the practitioner by providing increased or new kinds of resources (either knowledge resources or processing resources, such as an expanded field of attention).

The extra resources may support improved performance in several ways. One path is to off-load overhead information-processing activities from the person to the machine, to allow the human practitioner to focus his or her resources on higher-level issues and strategies. Examples include keeping track of multiple ongoing activities in an external memory, performing basic data computations or transformations, and collecting the evidence related to decisions about particular domain issues, as occurred recently with new computer-based displays in nuclear power plant control rooms. Extra resources may help to improve performance in another way, by allowing a restructuring of how the human performs the task, shifting performance onto a new, higher plateau (see Pea, 1985). This restructuring concept is in contrast to the usual notion of new systems as amplifiers of user capabilities. As Pea (1985) points out, the amplification metaphor implies that support systems improve human performance by increasing the strength or power of the cognitive processes the human problem solver goes through to solve the problem, but without any change in the underlying activities, processes, or strategies that determine how the problem is solved. Alternatively, the resources provided (or not provided) by new performance aids and interface systems can support restructuring of the activities, processes, or strategies that carry out the cognitive functions relevant to performing domain tasks (e.g., Woods and Roth, 1988). New levels of performance are now possible, and the kinds of errors one is prone to (and therefore the consequences of errors) change as well.


The instrumental perspective suggests that the most effective power provided by good cognitive tools is conceptualization power (Woods and Roth, 1988a). The importance of conceptualization power in effective problem-solving performance is often overlooked because the part of the problem-solving process that it most crucially affects, problem formulation and reformulation, is often left out of studies of problem solving and the design basis of new support systems. Support systems that increase conceptualization power (1) enhance a problem solver's ability to experiment with possible worlds or strategies (e.g., Hollan, Hutchins, and Weitzman, 1984; Pea, 1985; Woods et al., 1987); (2) enhance the ability to visualize or to make concrete the abstract and uninspectable (analogous to perceptual tools), in order to better see the implications of a concept and to help one restructure one's view of the problem (Becker and Cleveland, 1984; Coombs and Hartley, 1987; Hutchins, Hollan, and Norman, 1985; Pople, 1985); and (3) enhance error detection by providing better feedback about the effects and results of actions (Rizzo, Bagnara, and Visciola, 1987).

Problem-Driven

Cognitive engineering is problem-driven, tool-constrained. This means that cognitive engineering must be able to analyze a problem-solving context and understand the sources of both good and poor performance; that is, the cognitive problems to be solved or challenges to be met (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

To build a cognitive description of a problem-solving world, one must understand how representations of the world interact with different cognitive demands imposed by the application world in question and with characteristics of the cognitive agents, both for existing and prospective changes in the


world. Building a cognitive description is part of a problem-driven approach to the application of computational power.

The results from this analysis are used to define the kind of solutions that are needed to enhance successful performance: to meet cognitive demands of the world, to help the human function more expertly, and to eliminate or mitigate error-prone points in the total cognitive system (demand-resource mismatches). The results of this process then can be deployed in many possible ways, as constrained by tool-building limitations and tool-building possibilities: exploration training worlds, new information representation aids, advisory systems, or machine problem solvers (see Roth and Woods, 1988; Woods and Roth, 1988).

In tool-driven approaches, knowledge acquisition focuses on describing domain knowledge in terms of the syntax of computational mechanisms; that is, the language of implementation is used as a cognitive language. Semantic questions are displaced either to whoever selects the computational mechanisms or to the domain expert who enters knowledge.

The alternative is to provide an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean about problem solving in the domain (Woods, 1988). Acquiring and using a domain semantics is essential to avoiding potential errors and specifying performance boundaries when building intelligent machines (Roth et al., 1987). Techniques for analyzing cognitive demands not only help characterize a particular world but also help to build a repertoire of general cognitive situations that are transportable. There is a clear trend toward this conception of knowledge acquisition in order to achieve more effective decision support and fewer brittle machine problem solvers (e.g., Clancey, 1985; Coombs, 1986; Gruber and Cohen, 1987).

August 1988-425

AN EXAMPLE OF HOW COGNITIVE AND COMPUTATIONAL TECHNOLOGIES INTERACT

To illustrate the role of cognitive engineering in the deployment of new computational powers, consider a case in human-computer interaction (for other cases see Mitchell and Forren, 1987; Mitchell and Saisi, 1987; Roth and Woods, 1988; Woods and Roth, 1988). It is one example of how purely technology-driven deployment of new automation capabilities can produce unintended and unforeseen negative consequences. In this case an attempt was made to implement a computerized procedure system using a commercial hypertext system for building and navigating large network data bases. Because cognitive engineering issues were not considered in the application of the new technology, a high-level person-machine performance problem resulted: the getting-lost phenomenon (Woods, 1984). Based on a cognitive analysis of the world's demands, it was possible to redesign the system to support domain actors and eliminate the getting-lost problems (Elm and Woods, 1985). Through cognitive engineering it proved possible to build a more effective computerized procedure system that, for the most part, was within the technological boundaries set by the original technology chosen for the application.

The data base application in question was designed to computerize paper-based instructions for nuclear power plant emergency operation. The system was built based on a network data base shell with a built-in interface for navigating the network (Robertson, McCracken, and Newell, 1981). The shell already treated human-computer interface issues, so it was assumed possible to create the computerized system simply by


entering domain knowledge (i.e., the current instructions as implemented for the paper medium) into the interface and network data base framework provided by the shell.

The system contained two kinds of frames: menu frames, which served to point to other frames, and content frames, which contained instructions from the paper procedures (generally one procedure step per frame). In preliminary tests of the system it was found that people uniformly failed to complete recovery tasks with procedures computerized in this way. They became disoriented or "lost": unable to keep procedure steps in pace with plant behavior, unable to determine where they were in the network of frames, unable to decide where to go next, or unable even to find places where they knew they should be (i.e., they diagnosed the situation and knew the appropriate responses as trained operators, yet could not find the relevant procedural steps in the network). These results were found with people experienced with the paper-based procedures and plant operations as well as with people knowledgeable in the frame-network software package and how the procedures were implemented within it.

What was the source of the disorientation problem? It resulted from a failure to analyze the cognitive demands associated with using procedures in an externally paced world. For example, in using the procedures the operator often is required to interrupt one activity and transition to another step in the procedure, or to a different procedure, depending on plant conditions and plant responses to operator actions. As a result, operators need to be able to rapidly transition across procedure boundaries and to return to incomplete steps. Because of the size of a frame, there was a very high proportion of menu frames relative to content frames, and the content frames provided a narrow window on the world. This structure made it difficult to read ahead


to anticipate instructions, to mark steps pending completion and return to them easily, to see the organization of the steps, or to mark a trail of activities carried out during the recovery. Many activities that are inherently easy to perform in a physical book turned out to be very difficult to carry out, for example, reading ahead. The result was a mismatch between user information-processing activities during domain tasks and the structure of the interface as a representation of the world of recovery from abnormalities.
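The navigation cost behind this mismatch can be made concrete with a small sketch. Everything below is invented for illustration (the `Frame` class, the frame names, and the tiny network are hypothetical, not the actual data structures of the shell); it simply shows why a network dominated by menu frames, with one step per content frame, forces detours through menus whenever the operator tries to page through the steps in order.

```python
# Hypothetical sketch of the two frame types in the original procedure network.
# Menu frames only point to other frames; content frames hold one procedure step.

from dataclasses import dataclass, field

@dataclass
class Frame:
    name: str
    is_menu: bool
    children: list = field(default_factory=list)

# A tiny invented network: one top menu, two sub-menus, four one-step content frames.
step1 = Frame("step-1", is_menu=False)
step2 = Frame("step-2", is_menu=False)
step3 = Frame("step-3", is_menu=False)
step4 = Frame("step-4", is_menu=False)
menu_a = Frame("menu-A", is_menu=True, children=[step1, step2])
menu_b = Frame("menu-B", is_menu=True, children=[step3, step4])
top = Frame("top-menu", is_menu=True, children=[menu_a, menu_b])

def frames_visited_reading_in_order(root):
    """Count every frame displayed while paging through all steps in order.

    Each content frame shows a single step, so moving to the next step means
    backing out to the parent menu first -- the narrow-window problem.
    """
    visited = []
    def walk(frame):
        visited.append(frame.name)
        for child in frame.children:
            walk(child)
            visited.append(frame.name)  # backing out to the menu counts as a display
    walk(root)
    return visited

trail = frames_visited_reading_in_order(top)
content_views = sum(1 for name in trail if name.startswith("step"))
print(f"{len(trail)} frame displays to read {content_views} steps")  # → 13 displays, 4 steps
```

Under this (invented) accounting, most of what the operator sees while "reading ahead" is menus rather than procedure content, which is the structural source of the disorientation described above.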

These results triggered a full design cycle that began with a cognitive analysis to determine the user information-handling activities needed to effectively accomplish recovery tasks in emergency situations. Following procedures was not simply a matter of linear, step-by-step execution of instructions; rather, it required the ability to maintain a broad context of the purpose and relationships among the elements in the procedure (see also Brown et al., 1982; Roth et al., 1987). Operators needed to maintain awareness of the global context (i.e., how a given step fits into the overall plan), to anticipate the need for actions by looking ahead, and to monitor for changes in plant state that would require adaptation of the current response plan.

A variety of cognitive engineering techniques were utilized in a new interface design to support the demands (see Woods, 1984). First, a spatial metaphor was used to make the system more like a physical book. Second, display selection/movement options were presented in parallel, rather than sequentially, with procedural information (defining two types of windows in the interface). Transition options were presented at several grains of analysis to support moves from step to step as easily as moves across larger units in the structure of the procedure system. In addition, incomplete steps were automatically tracked, and those steps were made directly accessible (e.g., electronic bookmarks or placeholders).
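The electronic-bookmark behavior can be sketched in a few lines. The class and the step names below are hypothetical (the paper does not describe the system's internals at this level); the sketch only illustrates the behavior: leaving a step unfinished records a placeholder automatically, and a single action returns the operator to it.

```python
# Hypothetical sketch of automatic tracking of incomplete steps ("bookmarks").

class ProcedureSession:
    def __init__(self, steps):
        self.steps = list(steps)   # ordered procedure steps
        self.current = 0           # index of the step on display
        self.pending = []          # unfinished steps, oldest first (the bookmarks)

    def transition(self, target):
        """Jump to another step; the interrupted step is bookmarked automatically."""
        if self.steps[self.current] not in self.pending:
            self.pending.append(self.steps[self.current])
        self.current = self.steps.index(target)

    def complete(self):
        """Mark the current step done and drop any bookmark on it."""
        step = self.steps[self.current]
        if step in self.pending:
            self.pending.remove(step)

    def resume(self):
        """Return directly to the oldest incomplete step, as with a placeholder."""
        if self.pending:
            self.current = self.steps.index(self.pending[0])

# Invented step names, loosely in the spirit of emergency procedures.
session = ProcedureSession(["verify-SI", "check-RCS-pressure", "start-pumps"])
session.transition("start-pumps")   # plant conditions force a jump; step 1 is bookmarked
session.complete()                  # finish the interrupting step
session.resume()                    # one action returns to the interrupted step
print(session.steps[session.current])  # → verify-SI
```

The point of the sketch is that the bookkeeping is done by the system, not the operator, which is exactly the overhead activity that the paper procedures forced people to carry in their heads.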

To provide the global context within which the current procedure step occurs, the step of interest is presented in detail and is embedded in a skeletal structure of the larger response plan of which it is a part (Furnas, 1986; Woods, 1984). Context sensitivity was supported by displaying the rules for possible adaptation or shifts in the current response plan that are relevant to the current context, in parallel with current relevant options and the current region of interest in the procedures (a third window). Note how the cognitive analysis of the domain defined what types of data needed to be seen effectively in parallel, which then determined the number of windows required. Also note that the cognitive engineering redesign was, with a few exceptions, directly implementable within the base capabilities of the interface shell.
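One way to read the "skeletal structure" display is through Furnas's (1986) generalized fisheye formula, in which a node is shown when its degree of interest, DOI(x | focus) = API(x) − D(x, focus), clears a threshold (API is a priori importance; D is distance from the current focus). The sketch below is a simplified, hypothetical application of that idea to an invented response-plan tree, taking API as negative tree depth and D as tree distance; it is not the redesigned system's actual algorithm.

```python
# Minimal fisheye-view sketch: show the focus step embedded in a skeleton of the plan.
# The plan tree below is invented for illustration.

plan = {
    "recovery-plan": ["phase-1", "phase-2"],
    "phase-1": ["step-1.1", "step-1.2"],
    "phase-2": ["step-2.1", "step-2.2"],
}

def parent_of(node):
    return next((p for p, kids in plan.items() if node in kids), None)

def depth(node):
    d = 0
    while parent_of(node) is not None:
        node, d = parent_of(node), d + 1
    return d

def distance(a, b):
    """Tree distance computed from each node's path to the root."""
    path = lambda n: [n] + ([] if parent_of(n) is None else path(parent_of(n)))
    pa, pb = path(a), path(b)
    common = len(set(pa) & set(pb))
    return (len(pa) - common) + (len(pb) - common)

def skeleton(focus, threshold=-2):
    """Nodes whose degree of interest clears the threshold, given the focus step."""
    nodes = set(plan) | {k for kids in plan.values() for k in kids}
    doi = lambda n: -depth(n) - distance(n, focus)   # DOI = API - D, API = -depth
    return sorted(n for n in nodes if doi(n) >= threshold)

print(skeleton("step-1.2"))  # → ['phase-1', 'recovery-plan', 'step-1.2']
```

The output is the focus step plus the chain of ancestors above it: precisely the "step of interest embedded in a skeleton of the larger response plan" effect, with sibling steps and other phases suppressed until the focus moves.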

As the foregoing example saliently demonstrates, there can be severe penalties for failing to adequately map the cognitive demands of the environment. However, if we understand the cognitive requirements imposed by the domain, then a variety of techniques can be employed to build support systems for those functions.

SUMMARY

The problem of providing effective decision support hinges on how the designer decides what will be useful in a particular application. Can researchers provide designers with concepts and techniques to determine what will be useful support systems, or are we condemned to simply build what can be built practically and wait for the judgment of experience? Is principle-driven design possible?

A vigorous and viable cognitive engineering can provide the knowledge and techniques necessary for principle-driven design. Cognitive engineering does this by providing the basis for a problem-driven, rather than a technology-driven, approach whereby the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver. Cognitive engineering can address (a) existing cognitive systems, in order to identify deficiencies that cognitive system redesign can correct, and (b) prospective cognitive systems, as a design tool during the allocation of cognitive tasks and the development of an effective joint architecture. In this paper we have attempted to outline the questions that need to be answered to make this promise real and to point to research that already has begun to provide the necessary concepts and techniques.

REFERENCES

Allen, J., and Perrault, C. (1980). Analyzing intention in utterances. Artificial Intelligence, 15, 143-178.
Becker, R. A., and Cleveland, W. S. (1984). Brushing the scatterplot matrix: High-interaction graphical methods for analyzing multidimensional data (Tech. Report). Murray Hill, NJ: AT&T Bell Laboratories.
Boy, G. A. (1987). Operator assistant systems. International Journal of Man-Machine Studies, 27, 541-554. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press). Cognitive engineering in dynamic worlds. London: Academic Press.
Bransford, J., Sherwood, R., Vye, N., and Rieser, J. (1986). Teaching and problem solving: Research foundations. American Psychologist, 41, 1078-1089.
Brown, A. L., Bransford, J. D., Ferrara, R. A., and Campione, J. C. (1983). Learning, remembering, and understanding. In J. H. Flavell and E. M. Markman (Eds.), Carmichael's manual of child psychology. New York: Wiley.
Brown, A. L., and Campione, J. C. (1986). Psychological theory and the study of learning disabilities. American Psychologist, 41, 1059-1068.
Brown, J. S., and Burton, R. R. (1978). Diagnostic models for procedural bugs in basic mathematics. Cognitive Science, 2, 155-192.
Brown, J. S., Moran, T. P., and Williams, M. D. (1982). The semantics of procedures (Tech. Report). Palo Alto, CA: Xerox Palo Alto Research Center.
Brown, J. S., and Newman, S. E. (1985). Issues in cognitive and social ergonomics: From our house to Bauhaus. Human-Computer Interaction, 1, 359-391.
Brown, J. S., and VanLehn, K. (1980). Repair theory: A generative theory of bugs in procedural skills. Cognitive Science, 4, 379-426.
Carroll, J. M., and Carrithers, C. (1984). Training wheels in a user interface. Communications of the ACM, 27, 800-806.
Cheng, P. W., and Holyoak, K. J. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.
Cheng, P., Holyoak, K., Nisbett, R., and Oliver, L. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 18, 293-328.
Clancey, W. J. (1985). Heuristic classification. Artificial Intelligence, 27, 289-350.
Cohen, P., Day, D., Delisio, J., Greenberg, M., Kjeldsen, R., and Suthers, D. (1987). Management of uncertainty in medicine. In Proceedings of the IEEE Conference on Computers and Communications. New York: IEEE.
Coombs, M. J. (1986). Artificial intelligence and cognitive technology: Foundations and perspectives. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.
Coombs, M. J., and Alty, J. L. (1984). Expert systems: An alternative paradigm. International Journal of Man-Machine Studies, 20, 21-43.
Coombs, M. J., and Hartley, R. T. (1987). The MGR algorithm and its application to the generation of explanations for novel events. International Journal of Man-Machine Studies, 27, 679-708. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press). Cognitive engineering in dynamic worlds. London: Academic Press.
Davis, R. (1983). Reasoning from first principles in electronic troubleshooting. International Journal of Man-Machine Studies, 19, 403-423.
De Keyser, V. (1986). Les interactions hommes-machine: Caractéristiques et utilisations des différents supports d'information par les opérateurs (Person-machine interaction: How operators use different information channels). Rapport Politique Scientifique/FAST no. 8. Liège, Belgium: Psychologie du Travail, Université de l'État à Liège.
Dennett, D. (1982). Beyond belief. In A. Woodfield (Ed.), Thought and object. Oxford: Clarendon Press.
Dorner, D. (1983). Heuristics and cognition in complex systems. In R. Groner, M. Groner, and W. F. Bischof (Eds.), Methods of heuristics. Hillsdale, NJ: Erlbaum.
Ehn, P., and Kyng, M. (1984). A tool perspective on design of interactive computer support for skilled workers. Unpublished manuscript, Swedish Center for Working Life, Stockholm.
Elm, W. C., and Woods, D. D. (1985). Getting lost: A case study in interface design. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 927-931). Santa Monica, CA: Human Factors Society.
Fischhoff, B. (1986). Decision making in complex systems. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support. New York: Springer-Verlag.
Fischhoff, B., Slovic, P., and Lichtenstein, S. (1979). Improving intuitive judgment by subjective sensitivity analysis. Organizational Behavior and Human Performance, 23, 339-359.
Fischhoff, B., Lanir, Z., and Johnson, S. (1986). Military risk taking and modern C3I (Tech. Report 86-2). Eugene, OR: Decision Research.
Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgments. Psychological Bulletin, 101, 75-90.
Furnas, G. W. (1986). Generalized fisheye views. In M. Mantei and P. Orbeton (Eds.), Human Factors in Computing Systems: CHI'86 Conference Proceedings (pp. 16-23). New York: ACM/SIGCHI.
Gadd, C. S., and Pople, H. E. (1987). An interpretation synthesis model of medical teaching rounds discourse: Implications for expert system interaction. International Journal of Educational Research, 1.
Gentner, D., and Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Erlbaum.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Gick, M. L., and Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-365.
Glaser, R. (1984). Education and thinking: The role of knowledge. American Psychologist, 39, 93-104.
Gruber, T., and Cohen, P. (1987). Design for acquisition: Principles of knowledge system design to facilitate knowledge acquisition (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 143-159.
Henderson, A., and Card, S. (1986). Rooms: The use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface (Tech. Report). Palo Alto, CA: Xerox PARC.
Hirschhorn, L. (1984). Beyond mechanization: Work and technology in a postindustrial age. Cambridge, MA: MIT Press.
Hollan, J., Hutchins, E., and Weitzman, L. (1984). Steamer: An interactive inspectable simulation-based training system. AI Magazine, 4, 15-27.
Hollnagel, E., Mancini, G., and Woods, D. D. (Eds.). (1986). Intelligent decision support in process environments. New York: Springer-Verlag.
Hollnagel, E., and Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.
Hoogovens Report. (1976). Human factors evaluation: Hoogovens No. 2 hot strip mill (Tech. Report FR251). London: British Steel Corporation/Hoogovens.
Hutchins, E., Hollan, J., and Norman, D. A. (1985). Direct manipulation interfaces. Human-Computer Interaction, 1, 311-338.
James, W. (1890). The principles of psychology. New York: Holt.
Kieras, D. E., and Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365-394.
Klein, G. A. (in press). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine research (vol. 5). Greenwich, CT: JAI Press.
Kotovsky, K., Hayes, J. R., and Simon, H. A. (1985). Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 17, 248-294.
Kruglanski, A., Friedland, N., and Farkash, E. (1984). Lay persons' sensitivity to statistical information: The case of high perceived applicability. Journal of Personality and Social Psychology, 46, 503-518.
Lesh, R. (1987). The evolution of problem representations in the presence of powerful conceptual amplifiers. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics. Hillsdale, NJ: Erlbaum.
Malone, T. W. (1983). How do people organize their desks? Implications for designing office automation systems. ACM Transactions on Office Information Systems, 1, 99-112.
Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press. (Special issue of International Journal of Man-Machine Studies, vol. 27.)
March, J. G., and Weisinger-Baylon, R. (Eds.). (1986). Ambiguity and command. Marshfield, MA: Pitman Publishing.
McKendree, J., and Carroll, J. M. (1986). Advising roles of a computer consultant. In M. Mantei and P. Orbeton (Eds.), Human Factors in Computing Systems: CHI'86 Conference Proceedings (pp. 35-40). New York: ACM/SIGCHI.
Miller, P. L. (1983). ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 449-461.
Mitchell, C., and Forren, M. G. (1987). Multimodal user input to supervisory control systems: Voice-augmented keyboard. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 594-607.
Mitchell, C., and Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.
Miyata, Y., and Norman, D. A. (1986). Psychological issues in support of multiple activities. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.
Montmollin, M. de, and De Keyser, V. (1985). Expert logic vs. operator logic. In G. Johannsen, G. Mancini, and L. Martensson (Eds.), Analysis, design and evaluation of man-machine systems. Ispra, Italy: CEC-JRC, IFAC.
Muir, B. (1987). Trust between humans and machines. International Journal of Man-Machine Studies, 27, 527-539. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press). Cognitive engineering in dynamic worlds. London: Academic Press.
Newell, A., and Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209-242.
Noble, D. F. (1984). Forces of production: A social history of industrial automation. New York: Alfred A. Knopf.
Norman, D. A. (1981). Steps towards a cognitive engineering (Tech. Report). San Diego: University of California, San Diego, Program in Cognitive Science.
Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the ACM, 26, 254-258.
Norman, D. A., and Draper, S. W. (1986). User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.
Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.
Perkins, D., and Martin, F. (1986). Fragile knowledge and neglected strategies in novice programmers. In E. Soloway and S. Iyengar (Eds.), Empirical studies of programmers. Norwood, NJ: Ablex.
Pew, R. W., et al. (1986). Cockpit automation technology (Tech. Report 6133). Cambridge, MA: BBN Laboratories, Inc.
Pope, R. H. (1978). Power station control room and desk design: Alarm system and experience in the use of CRT displays. In Proceedings of the International Symposium on Nuclear Power Plant Control and Instrumentation. Cannes, France.
Pople, H., Jr. (1985). Evolution of an expert system: From Internist to Caduceus. In I. De Lotto and M. Stefanelli (Eds.), Artificial intelligence in medicine. New York: Elsevier Science Publishers.
Quinn, L., and Russell, D. M. (1986). Intelligent interfaces: User models and planners. In M. Mantei and P. Orbeton (Eds.), Human Factors in Computing Systems: CHI'86 Conference Proceedings (pp. 314-320). New York: ACM/SIGCHI.
Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.
Rizzo, A., Bagnara, S., and Visciola, M. (1987). Human error detection processes. International Journal of Man-Machine Studies, 27, 555-570. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press). Cognitive engineering in dynamic worlds. London: Academic Press.
Robertson, G., McCracken, D., and Newell, A. (1981). The ZOG approach to man-machine communication. International Journal of Man-Machine Studies, 14, 461-488.
Rochlin, G. I., La Porte, T. R., and Roberts, K. H. (in press). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review.
Roth, E. M., Bennett, K., and Woods, D. D. (1987). Human interaction with an intelligent machine. International Journal of Man-Machine Studies, 27, 479-525. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press). Cognitive engineering in dynamic worlds. London: Academic Press.
Roth, E. M., and Woods, D. D. (1988). Aiding human performance I: Cognitive analysis. Le Travail Humain, 51(1), 39-64.
Schum, D. A. (1980). Current developments in research on cascaded inference. In T. S. Wallstein (Ed.), Cognitive processes in decision and choice behavior. Hillsdale, NJ: Erlbaum.
Selfridge, O. G., Rissland, E. L., and Arbib, M. A. (1984). Adaptive control of ill-defined systems. New York: Plenum Press.
Sheridan, T., and Hennessy, R. (Eds.). (1984). Research and modeling of supervisory control behavior. Washington, DC: National Academy Press.
Shneiderman, B. (1986). Seven plus or minus two central issues in human-computer interaction. In M. Mantei and P. Orbeton (Eds.), Human Factors in Computing Systems: CHI'86 Conference Proceedings (pp. 343-349). New York: ACM/SIGCHI.
Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S., and Suchman, L. (1985, September). Beyond the chalkboard: Using computers to support collaboration and problem solving in meetings (Tech. Report). Palo Alto, CA: Intelligent Systems Laboratory, Xerox Palo Alto Research Center.
Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.
Wiecha, C., and Henrion, M. (1987). A graphical tool for structuring and understanding quantitative decision models. In Proceedings of the Workshop on Visual Languages. New York: IEEE Computer Society.
Wiener, E. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.
Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229-244.
Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.
Woods, D. D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L. P. Goodstein, H. B. Andersen, and S. E. Olsen (Eds.), Mental models, tasks and errors: A collection of essays to celebrate Jens Rasmussen's 60th birthday. London: Taylor and Francis.
Woods, D. D., and Hollnagel, E. (1987). Mapping cognitive demands in complex problem-solving worlds (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 257-275.
Woods, D. D., and Roth, E. M. (1986). Models of cognitive behavior in nuclear power plant personnel (NUREG-CR-4532). Washington, DC: U.S. Nuclear Regulatory Commission.
Woods, D. D., and Roth, E. M. (1988a). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interaction. New York: North-Holland.
Woods, D. D., and Roth, E. M. (1988b). Aiding human performance II: From cognitive analysis to support systems. Le Travail Humain, 51, 139-172.
Woods, D. D., Roth, E. M., and Pople, H. (1987). Cognitive Environment Simulation: An artificial intelligence system for human performance assessment (NUREG-CR-4862). Washington, DC: U.S. Nuclear Regulatory Commission.


these applications that it is essential to distinguish the world to be acted on from the interface or window on the world (how one comes to know that world) and from agents who can act directly or indirectly on the world.

The system of interest in design should not be the machine problem solver per se, nor should the focus of interest in evaluation be the performance of the machine problem solver alone. Ultimately, the focus must be the design and the performance of the human-machine problem-solving ensemble: how to couple human intelligence and machine power in a single integrated system that maximizes overall performance.

about Multiple Cognitive Agents

A large number of the worlds that cognitive engineering should be able to address contain multiple agents who can act on the world in question (e.g., command and control, process control, data communication networks). Not only do we need to be clear about where systemic boundaries are drawn with respect to the application world and interfaces to or representations of the world, we also need to be clear about the different agents who can act directly or indirectly on the world. Cognitive engineering must be able to address systems with multiple cognitive agents. This applies to multiple human cognitive systems, often called distributed decision making (e.g., Fischhoff, 1986; Fischhoff et al., 1986; March and Weisinger-Baylon, 1986; Rochlin, La Porte, and Roberts, in press; Schum, 1980).

Because of the expansions in computational powers, the machine element can be thought of as a partially autonomous cognitive agent in its own right. This raises the problem of how to build a cognitive system that combines both human and machine cognitive systems, or, in other words, joint cognitive systems (Hollnagel, Mancini, and Woods, 1986; Mancini et al., in press). When a system includes these machine agents, the human role is not eliminated but shifted. This means that changes in automation are changes in the joint human-machine cognitive system, and the design goal is to maximize overall performance.

One metaphor that is often invoked to frame questions about the relationship between human and machine intelligence is to examine human-human relationships in multiperson problem-solving or advisory situations and then to transpose the results to human-intelligent machine interaction (e.g., Coombs and Alty, 1984). Following this metaphor leads Muir (1987) to raise the question of the role of trust between man and machine in effective performance. One provocative question that Muir's analysis generates is: how does the level of trust between human and machine problem solvers affect performance? The practitioner's judgment of machine competence or predictability can be miscalibrated, leading to excessive trust or mistrust. Either a system will be underutilized or ignored when it could provide effective assistance, or the practitioner will defer to the machine even in areas that challenge or exceed the machine's range of competence.

Another question concerns how trust is established between human and machine. Trust or mistrust is based on cumulative experience with the other agent that provides evidence about enduring characteristics of the agent, such as competence and predictability. This means that factors about how new technology is introduced into the work environment can play a critical role in building or undermining trust in the machine problem solver. If this stage of technology introduction is mishandled (for example, practitioners are exposed to the system before it


is adequately debugged), the practitioner's trust in the machine's competence can be undermined. Muir's analysis shows how variations in explanation and display facilities affect how the person will use the machine, by affecting his or her ability to see how the machine works and therefore his or her level of calibration. Muir also points out how human information-processing biases can affect how the evidence of experience is interpreted in the calibration process.

A second metaphor that is frequently invoked is supervisory control (Rasmussen, 1986; Sheridan and Hennessy, 1984). Again the machine element is thought of as a semiautonomous cognitive system, but in this case it is a lower-order subordinate, albeit partially autonomous. The human supervisor generally has a wider range of responsibility, and he or she possesses ultimate responsibility and authority. Boy (1987) uses this metaphor to guide the development of assistant systems built from AI technology.

In order for a supervisory control architecture between human and machine agents to function effectively, several requirements must be met that, as Woods (1986) has pointed out, are often overlooked when tool-driven constraints dominate design. First, the supervisor must have real as well as titular authority; machine problem solvers can be designed and introduced in such a way that the human retains the responsibility for outcomes without any effective authority. Second, the supervisor must be able to redirect the lower-order machine cognitive system. Roth et al. (1987) found that some practitioners tried to devise ways to instruct an expert system in situations in which the machine's problem solving had broken down, even when the machine's designer had provided no such mechanisms. Third, in order to be able to supervise another agent, there is need for a common or shared representation


of the state of the world and of the state of the problem-solving process (Woods and Roth, 1988b); otherwise communication between the agents will break down (e.g., Suchman, 1987).

Significant attention has been devoted to the issue of how to get intelligent machines to assess the goals and intentions of humans without requiring explicit statements (e.g., Allen and Perrault, 1980; Quinn and Russell, 1986). However, the supervisory control metaphor highlights that it is at least as important to pay attention to what information or knowledge people need to track the intelligent machine's "state of mind" (Woods and Roth, 1988a).

A third metaphor is to consider the new machine capabilities as extensions and expansions along a dimension of machine power. In this metaphor machines are tools; people are tool builders and tool users. Technological development has moved from physical tools (tools that magnify capacity for physical work) to perceptual tools (extensions to human perceptual apparatus, such as medical imaging) and now, with the arrival of AI technology, to cognitive tools. (Although this type of tool has a much longer history, e.g., aide-mémoires or decision analysis, AI has certainly increased the interest in and ability to provide cognitive tools.)

In this metaphor, the question of the relationship between machine and human takes the form of: what kind of tool is an intelligent machine? (e.g., Ehn and Kyng, 1984; Suchman, 1987; Woods, 1986). At one extreme, the machine can be a prosthesis that compensates for a deficiency in human reasoning or problem solving. This could be a local deficiency for the population of expected human practitioners or a global weakness in human reasoning. At the other extreme, the machine can be an instrument in the hands of a fundamentally competent but limited-resource


human practitioner (Woods, 1986). The machine aids the practitioner by providing increased or new kinds of resources (either knowledge resources or processing resources, such as an expanded field of attention).

The extra resources may support improvedperformance in several ways One path is tooff-load overhead information-processing ac-tivities from the person to the machine toallow the human practitioner to focus his orher resources on higher-level issues andstrategies Examples include keeping track ofmultiple ongoing activities in an externalmemory performing basic data computa-tions or transformations and collecting theevidence related to decisions about particu-lar domain issues as occurred recently withnew computer-based displays in nuclearpower plant control rooms Extra resourcesmay help to improve performance in anotherway by allowing a restructuring of how thehuman performs the task shifting perfor-mance onto a new higher plateau (see Pea1985) This restructuring concept is in con-trast to the usual notion of new systems asamplifiers of user capabilities As Pea (1985)points out the amplification metaphor im-plies that support systems improve humanperformance by increasing the strength orpower of the cognitive processes the humanproblem solver goes through to solve theproblem but without any change in the un-derlying activities processes or strategiesthat determine how the problem is solvedAlternatively the resources provided (or notprovided) by new performance aids and in-terface systems can support restructuring ofthe activities processes or strategies thatcarry out the cognitive functions relevant toperforming domain tasks (eg Woods andRoth 1988) New levels of performance arenow possible and the kinds of errors one isprone to (and therefore the consequences oferrors) change as well


The instrumental perspective suggests that the most effective power provided by good cognitive tools is conceptualization power (Woods and Roth, 1988a). The importance of conceptualization power in effective problem-solving performance is often overlooked because the part of the problem-solving process that it most crucially affects, problem formulation and reformulation, is often left out of studies of problem solving and the design basis of new support systems. Support systems that increase conceptualization power (1) enhance a problem solver's ability to experiment with possible worlds or strategies (e.g., Hollan, Hutchins, and Weitzman, 1984; Pea, 1985; Woods et al., 1987); (2) enhance the ability to visualize or to make concrete the abstract and uninspectable (analogous to perceptual tools) in order to better see the implications of concepts and to help one restructure one's view of the problem (Becker and Cleveland, 1984; Coombs and Hartley, 1987; Hutchins, Hollan, and Norman, 1985; Pople, 1985); and (3) enhance error detection by providing better feedback about the effects or results of actions (Rizzo, Bagnara, and Visciola, 1987).

Problem-Driven

Cognitive engineering is problem-driven, tool-constrained. This means that cognitive engineering must be able to analyze a problem-solving context and understand the sources of both good and poor performance: that is, the cognitive problems to be solved or challenges to be met (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

To build a cognitive description of a problem-solving world, one must understand how representations of the world interact with the different cognitive demands imposed by the application world in question and with the characteristics of the cognitive agents, both for existing and prospective changes in the world. Building a cognitive description is part of a problem-driven approach to the application of computational power.

The results from this analysis are used to define the kind of solutions that are needed to enhance successful performance: to meet the cognitive demands of the world, to help the human function more expertly, and to eliminate or mitigate error-prone points in the total cognitive system (demand-resource mismatches). The results of this process can then be deployed in many possible ways, as constrained by tool-building limitations and tool-building possibilities: exploration training worlds, new information representation aids, advisory systems, or machine problem solvers (see Roth and Woods, 1988; Woods and Roth, 1988).

In tool-driven approaches, knowledge acquisition focuses on describing domain knowledge in terms of the syntax of computational mechanisms; that is, the language of implementation is used as a cognitive language. Semantic questions are displaced either to whomever selects the computational mechanisms or to the domain expert who enters knowledge.

The alternative is to provide an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean about problem solving in the domain (Woods, 1988). Acquiring and using a domain semantics is essential to avoiding potential errors and specifying performance boundaries when building intelligent machines (Roth et al., 1987). Techniques for analyzing cognitive demands not only help characterize a particular world but also help to build a repertoire of general cognitive situations that are transportable. There is a clear trend toward this conception of knowledge acquisition in order to achieve more effective decision support and fewer brittle machine problem solvers (e.g., Clancey, 1985; Coombs, 1986; Gruber and Cohen, 1987).

AN EXAMPLE OF HOW COGNITIVE AND COMPUTATIONAL TECHNOLOGIES INTERACT

To illustrate the role of cognitive engineering in the deployment of new computational powers, consider a case in human-computer interaction (for other cases see Mitchell and Forren, 1987; Mitchell and Saisi, 1987; Roth and Woods, 1988; Woods and Roth, 1988). It is one example of how purely technology-driven deployment of new automation capabilities can produce unintended and unforeseen negative consequences. In this case an attempt was made to implement a computerized procedure system using a commercial hypertext system for building and navigating large network data bases. Because cognitive engineering issues were not considered in the application of the new technology, a high-level person-machine performance problem resulted: the getting-lost phenomenon (Woods, 1984). Based on a cognitive analysis of the world's demands, it was possible to redesign the system to support domain actors and eliminate the getting-lost problems (Elm and Woods, 1985). Through cognitive engineering it proved possible to build a more effective computerized procedure system that, for the most part, was within the technological boundaries set by the original technology chosen for the application.

The data base application in question was designed to computerize paper-based instructions for nuclear power plant emergency operation. The system was built based on a network data base shell with a built-in interface for navigating the network (Robertson, McCracken, and Newell, 1981). The shell already treated human-computer interface issues, so it was assumed possible to create the computerized system simply by entering domain knowledge (i.e., the current instructions as implemented for the paper medium) into the interface and network data base framework provided by the shell.

The system contained two kinds of frames: menu frames, which served to point to other frames, and content frames, which contained instructions from the paper procedures (generally one procedure step per frame). In preliminary tests of the system, it was found that people uniformly failed to complete recovery tasks with procedures computerized in this way. They became disoriented or "lost": unable to keep procedure steps in pace with plant behavior, unable to determine where they were in the network of frames, unable to decide where to go next, or unable even to find places where they knew they should be (i.e., they diagnosed the situation and knew the appropriate responses as trained operators, yet could not find the relevant procedural steps in the network). These results were found with people experienced with the paper-based procedures and plant operations, as well as with people knowledgeable in the frame-network software package and how the procedures were implemented within it.
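The two-frame organization described above can be sketched as a simple data structure; its shape illustrates why a reader had to hop through menu screens, one at a time, to reach any single step. The frame names and step text are invented for illustration; the original system was built on the ZOG shell, not on code like this.

```python
# Illustrative reconstruction (not the original ZOG code) of the network:
# menu frames that only point elsewhere, and content frames holding
# roughly one procedure step each.

class Frame:
    def __init__(self, name, kind, text="", links=None):
        self.name = name
        self.kind = kind          # "menu" or "content"
        self.text = text          # one procedure step, if a content frame
        self.links = links or []  # frames reachable from this screen

# A fragment of a hypothetical procedure network: reaching one step means
# descending through menu frames, each displayed as a separate screen.
step = Frame("E-1 step 3", "content", text="Verify SI pumps running")
submenu = Frame("E-1 steps", "menu", links=[step])
root = Frame("Procedures", "menu", links=[submenu])

def hops_to(frame, target, depth=0):
    """Count the screen transitions needed to reach a target frame."""
    if frame is target:
        return depth
    for nxt in frame.links:
        found = hops_to(nxt, target, depth + 1)
        if found is not None:
            return found
    return None

print(hops_to(root, step))  # -> 2
```

With one step per content frame and screens this narrow, every glance ahead or back costs traversal through menu frames, which is the structural source of the high menu-to-content ratio noted in the text.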

What was the source of the disorientation problem? It resulted from a failure to analyze the cognitive demands associated with using procedures in an externally paced world. For example, in using the procedures the operator often is required to interrupt one activity and transition to another step in the procedure, or to a different procedure, depending on plant conditions and plant responses to operator actions. As a result, operators need to be able to rapidly transition across procedure boundaries and to return to incomplete steps. Because of the size of a frame, there was a very high proportion of menu frames relative to content frames, and the content frames provided a narrow window on the world. This structure made it difficult to read ahead to anticipate instructions, to mark steps pending completion and return to them easily, to see the organization of the steps, or to mark a trail of activities carried out during the recovery. Many activities that are inherently easy to perform in a physical book turned out to be very difficult to carry out, for example, reading ahead. The result was a mismatch between user information-processing activities during domain tasks and the structure of the interface as a representation of the world of recovery from abnormalities.

These results triggered a full design cycle that began with a cognitive analysis to determine the user information-handling activities needed to effectively accomplish recovery tasks in emergency situations. Following procedures was not simply a matter of linear, step-by-step execution of instructions; rather, it required the ability to maintain a broad context of the purpose and relationships among the elements in the procedure (see also Brown et al., 1982; Roth et al., 1987). Operators needed to maintain awareness of the global context (i.e., how a given step fits into the overall plan), to anticipate the need for actions by looking ahead, and to monitor for changes in plant state that would require adaptation of the current response plan.

A variety of cognitive engineering techniques were utilized in a new interface design to support these demands (see Woods, 1984). First, a spatial metaphor was used to make the system more like a physical book. Second, display selection/movement options were presented in parallel, rather than sequentially, with procedural information (defining two types of windows in the interface). Transition options were presented at several grains of analysis to support moves from step to step as easily as moves across larger units in the structure of the procedure system. In addition, incomplete steps were automatically tracked, and those steps were made directly accessible (e.g., electronic bookmarks or placeholders).
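The electronic-bookmark behavior described above can be sketched as follows. The class and step identifiers are hypothetical, and the sketch covers only the automatic tracking of interrupted steps, not the windowing features of the redesign.

```python
# A minimal sketch, under assumed names, of automatic bookmark tracking:
# a step left unfinished when the operator transitions elsewhere is
# recorded and remains directly accessible for later resumption.

class ProcedureSession:
    def __init__(self):
        self.current = None
        self.pending = []   # incomplete steps, in the order they were left

    def begin(self, step):
        self.current = step

    def transition(self, step, current_done=False):
        """Move to another step; an unfinished current step is bookmarked."""
        if self.current and not current_done:
            self.pending.append(self.current)
        self.current = step

    def resume(self):
        """Jump straight back to the most recently interrupted step."""
        if self.pending:
            self.current = self.pending.pop()
        return self.current

session = ProcedureSession()
session.begin("E-0 step 4")
session.transition("FR-H.1 step 1")   # plant conditions force a jump
print(session.pending)                # -> ['E-0 step 4']
print(session.resume())               # -> E-0 step 4
```

The design choice worth noticing is that the bookkeeping happens as a side effect of navigation: the operator never has to remember to mark a step before leaving it, which is exactly the burden the paper procedures imposed.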

To provide the global context within which the current procedure step occurs, the step of interest is presented in detail and is embedded in a skeletal structure of the larger response plan of which it is a part (Furnas, 1986; Woods, 1984). Context sensitivity was supported by displaying the rules for possible adaptation or shifts in the current response plan that are relevant to the current context, in parallel with current relevant options and the current region of interest in the procedures (a third window). Note how the cognitive analysis of the domain defined what types of data needed to be seen effectively in parallel, which then determined the number of windows required. Also note that the cognitive engineering redesign was, with a few exceptions, directly implementable within the base capabilities of the interface shell.
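One way to picture the step-embedded-in-a-skeletal-plan display is a renderer that highlights only the focused step while keeping the rest of the response plan visible as an outline, in the spirit of Furnas's (1986) fisheye views. The plan content below is invented for illustration and does not come from any actual emergency procedure.

```python
# Hedged illustration of the skeletal-context display: the focused step is
# marked, and its procedure and section headings supply the surrounding
# context, so the operator can see where the step sits in the overall plan.

plan = {
    "E-1 Loss of Coolant": {
        "Verify safety injection": ["Check SI pumps", "Check flow path"],
        "Control cooldown": ["Monitor RCS pressure", "Adjust steam dumps"],
    }
}

def skeletal_view(plan, focus):
    """Render the focused step inside a skeleton of the larger plan."""
    lines = []
    for procedure, sections in plan.items():
        lines.append(procedure)
        for section, steps in sections.items():
            lines.append("  " + section)
            for step in steps:
                # the focused step is marked; siblings stay as skeleton
                marker = ">> " if step == focus else "   "
                lines.append("    " + marker + step)
    return "\n".join(lines)

print(skeletal_view(plan, "Monitor RCS pressure"))
```

A fuller fisheye view would also suppress detail as distance from the focus grows; this sketch keeps the whole (small) plan visible and only marks the focus.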

As the foregoing example saliently demonstrates, there can be severe penalties for failing to adequately map the cognitive demands of the environment. However, if we understand the cognitive requirements imposed by the domain, then a variety of techniques can be employed to build support systems for those functions.

SUMMARY

The problem of providing effective decision support hinges on how the designer decides what will be useful in a particular application. Can researchers provide designers with concepts and techniques to determine what will be useful support systems, or are we condemned simply to build what can be built practically and wait for the judgment of experience? Is principle-driven design possible?

A vigorous and viable cognitive engineering can provide the knowledge and techniques necessary for principle-driven design. Cognitive engineering does this by providing the basis for a problem-driven, rather than a technology-driven, approach whereby the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver. Cognitive engineering can address (a) existing cognitive systems, in order to identify deficiencies that cognitive system redesign can correct, and (b) prospective cognitive systems, as a design tool during the allocation of cognitive tasks and the development of an effective joint architecture. In this paper we have attempted to outline the questions that need to be answered to make this promise real and to point to research that already has begun to provide the necessary concepts and techniques.

REFERENCES

Allen, J., and Perrault, C. (1980). Analyzing intention in utterances. Artificial Intelligence, 15, 143-178.
Becker, R. A., and Cleveland, W. S. (1984). Brushing the scatterplot matrix: High-interaction graphical methods for analyzing multidimensional data (Tech. Report). Murray Hill, NJ: AT&T Bell Laboratories.
Boy, G. A. (1987). Operator assistant systems. International Journal of Man-Machine Studies, 27, 541-554. Also in Mancini, G., Woods, D., and Hollnagel, E. (Eds.), (in press). Cognitive engineering in dynamic worlds. London: Academic Press.
Bransford, J., Sherwood, R., Vye, N., and Rieser, J. (1986). Teaching and problem solving: Research foundations. American Psychologist, 41, 1078-1089.
Brown, A. L., Bransford, J. D., Ferrara, R. A., and Campione, J. C. (1983). Learning, remembering, and understanding. In J. H. Flavell and E. M. Markman (Eds.), Carmichael's manual of child psychology. New York: Wiley.
Brown, A. L., and Campione, J. C. (1986). Psychological theory and the study of learning disabilities. American Psychologist, 41, 1059-1068.
Brown, J. S., and Burton, R. R. (1978). Diagnostic models for procedural bugs in basic mathematics. Cognitive Science, 2, 155-192.
Brown, J. S., Moran, T. P., and Williams, M. D. (1982). The semantics of procedures (Tech. Report). Palo Alto, CA: Xerox Palo Alto Research Center.
Brown, J. S., and Newman, S. E. (1985). Issues in cognitive and social ergonomics: From our house to Bauhaus. Human-Computer Interaction, 1, 359-391.
Brown, J. S., and VanLehn, K. (1980). Repair theory: A generative theory of bugs in procedural skills. Cognitive Science, 4, 379-426.
Carroll, J. M., and Carrithers, C. (1984). Training wheels in a user interface. Communications of the ACM, 27, 800-806.
Cheng, P. W., and Holyoak, K. J. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.
Cheng, P., Holyoak, K., Nisbett, R., and Oliver, L. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 18, 293-328.
Clancey, W. J. (1985). Heuristic classification. Artificial Intelligence, 27, 289-350.
Cohen, P., Day, D., Delisio, J., Greenberg, M., Kjeldsen, R., and Suthers, D. (1987). Management of uncertainty in medicine. In Proceedings of the IEEE Conference on Computers and Communications. New York: IEEE.
Coombs, M. J. (1986). Artificial intelligence and cognitive technology: Foundations and perspectives. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.
Coombs, M. J., and Alty, J. L. (1984). Expert systems: An alternative paradigm. International Journal of Man-Machine Studies, 20, 21-43.
Coombs, M. J., and Hartley, R. T. (1987). The MGR algorithm and its application to the generation of explanations for novel events. International Journal of Man-Machine Studies, 27, 679-708. Also in Mancini, G., Woods, D., and Hollnagel, E. (Eds.), (in press). Cognitive engineering in dynamic worlds. London: Academic Press.
Davis, R. (1983). Reasoning from first principles in electronic troubleshooting. International Journal of Man-Machine Studies, 19, 403-423.
De Keyser, V. (1986). Les interactions hommes-machine: Caractéristiques et utilisations des différents supports d'information par les opérateurs (Person-machine interaction: How operators use different information channels). Rapport Politique Scientifique/FAST no. 8. Liège, Belgium: Psychologie du Travail, Université de l'État à Liège.
Dennett, D. (1982). Beyond belief. In A. Woodfield (Ed.), Thought and object. Oxford: Clarendon Press.
Dorner, D. (1983). Heuristics and cognition in complex systems. In R. Groner, M. Groner, and W. F. Bischof (Eds.), Methods of heuristics. Hillsdale, NJ: Erlbaum.
Ehn, P., and Kyng, M. (1984). A tool perspective on design of interactive computer support for skilled workers. Unpublished manuscript, Swedish Center for Working Life, Stockholm.
Elm, W. C., and Woods, D. D. (1985). Getting lost: A case study in interface design. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 927-931). Santa Monica, CA: Human Factors Society.
Fischhoff, B. (1986). Decision making in complex systems. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support. New York: Springer-Verlag.
Fischhoff, B., Slovic, P., and Lichtenstein, S. (1979). Improving intuitive judgment by subjective sensitivity analysis. Organizational Behavior and Human Performance, 23, 339-359.
Fischhoff, B., Lanir, Z., and Johnson, S. (1986). Military risk taking and modern C3I (Tech. Report 86-2). Eugene, OR: Decision Research.
Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgments. Psychological Bulletin, 101, 75-90.
Furnas, G. W. (1986). Generalized fisheye views. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 16-23). New York: ACM/SIGCHI.
Gadd, C. S., and Pople, H. E. (1987). An interpretation synthesis model of medical teaching rounds discourse: Implications for expert system interaction. International Journal of Educational Research, 1.
Gentner, D., and Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Erlbaum.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Gick, M. L., and Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-365.
Glaser, R. (1984). Education and thinking: The role of knowledge. American Psychologist, 39, 93-104.
Gruber, T., and Cohen, P. (1987). Design for acquisition: Principles of knowledge system design to facilitate knowledge acquisition (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 143-159.
Henderson, A., and Card, S. (1986). Rooms: The use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface (Tech. Report). Palo Alto, CA: Xerox PARC.
Hirschhorn, L. (1984). Beyond mechanization: Work and technology in a postindustrial age. Cambridge, MA: MIT Press.
Hollan, J., Hutchins, E., and Weitzman, L. (1984). Steamer: An interactive inspectable simulation-based training system. AI Magazine, 4, 15-27.
Hollnagel, E., Mancini, G., and Woods, D. D. (Eds.). (1986). Intelligent decision support in process environments. New York: Springer-Verlag.
Hollnagel, E., and Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.
Hoogovens Report (1976). Human factors evaluation: Hoogovens No. 2 hot strip mill (Tech. Report FR251). London: British Steel Corporation/Hoogovens.
Hutchins, E., Hollan, J., and Norman, D. A. (1985). Direct manipulation interfaces. Human-Computer Interaction, 1, 311-338.
James, W. (1890). The principles of psychology. New York: Holt.
Kieras, D. E., and Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365-394.
Klein, G. A. (in press). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine research, vol. 5. Greenwich, CT: JAI Press.
Kotovsky, K., Hayes, J. R., and Simon, H. A. (1985). Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 17, 248-294.
Kruglanski, A., Friedland, N., and Farkash, E. (1984). Lay persons' sensitivity to statistical information: The case of high perceived applicability. Journal of Personality and Social Psychology, 46, 503-518.
Lesh, R. (1987). The evolution of problem representations in the presence of powerful conceptual amplifiers. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics. Hillsdale, NJ: Erlbaum.
Malone, T. W. (1983). How do people organize their desks? Implications for designing office automation systems. ACM Transactions on Office Information Systems, 1, 99-112.
Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press. (Special issue of International Journal of Man-Machine Studies, vol. 27.)
March, J. G., and Weisinger-Baylon, R. (Eds.). (1986). Ambiguity and command. Marshfield, MA: Pitman Publishing.
McKendree, J., and Carroll, J. M. (1986). Advising roles of a computer consultant. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 35-40). New York: ACM/SIGCHI.
Miller, P. L. (1983). ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 449-461.
Mitchell, C., and Forren, M. G. (1987). Multimodal user input to supervisory control systems: Voice-augmented keyboard. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 594-607.
Mitchell, C., and Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.
Miyata, Y., and Norman, D. A. (1986). Psychological issues in support of multiple activities. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.
Montmollin, M. de, and De Keyser, V. (1985). Expert logic vs. operator logic. In G. Johannsen, G. Mancini, and L. Martensson (Eds.), Analysis, design and evaluation of man-machine systems. CEC-JRC Ispra, Italy: IFAC.
Muir, B. (1987). Trust between humans and machines. International Journal of Man-Machine Studies, 27, 527-539. Also in Mancini, G., Woods, D., and Hollnagel, E. (Eds.), (in press). Cognitive engineering in dynamic worlds. London: Academic Press.
Newell, A., and Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209-242.
Noble, D. F. (1984). Forces of production: A social history of industrial automation. New York: Alfred A. Knopf.
Norman, D. A. (1981). Steps towards a cognitive engineering (Tech. Report). San Diego: University of California, San Diego, Program in Cognitive Science.
Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the ACM, 26, 254-258.
Norman, D. A., and Draper, S. W. (1986). User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.
Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.
Perkins, D., and Martin, F. (1986). Fragile knowledge and neglected strategies in novice programmers. In E. Soloway and S. Iyengar (Eds.), Empirical studies of programmers. Norwood, NJ: Ablex.
Pew, R. W., et al. (1986). Cockpit automation technology (Tech. Report 6133). Cambridge, MA: BBN Laboratories, Inc.
Pope, R. H. (1978). Power station control room and desk design: Alarm system and experience in the use of CRT displays. In Proceedings of the International Symposium on Nuclear Power Plant Control and Instrumentation. Cannes, France.
Pople, H., Jr. (1985). Evolution of an expert system: From Internist to Caduceus. In I. De Lotto and M. Stefanelli (Eds.), Artificial intelligence in medicine. New York: Elsevier Science Publishers.
Quinn, L., and Russell, D. M. (1986). Intelligent interfaces: User models and planners. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 314-320). New York: ACM/SIGCHI.
Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.
Rizzo, A., Bagnara, S., and Visciola, M. (1987). Human error detection processes. International Journal of Man-Machine Studies, 27, 555-570. Also in Mancini, G., Woods, D., and Hollnagel, E. (Eds.), (in press). Cognitive engineering in dynamic worlds. London: Academic Press.
Robertson, G., McCracken, D., and Newell, A. (1981). The ZOG approach to man-machine communication. International Journal of Man-Machine Studies, 14, 461-488.
Rochlin, G. I., La Porte, T. R., and Roberts, K. H. (in press). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review.
Roth, E. M., Bennett, K., and Woods, D. D. (1987). Human interaction with an intelligent machine. International Journal of Man-Machine Studies, 27, 479-525. Also in Mancini, G., Woods, D., and Hollnagel, E. (Eds.), (in press). Cognitive engineering in dynamic worlds. London: Academic Press.
Roth, E. M., and Woods, D. D. (1988). Aiding human performance I: Cognitive analysis. Le Travail Humain, 51(1), 39-64.
Schum, D. A. (1980). Current developments in research on cascaded inference. In T. S. Wallstein (Ed.), Cognitive processes in decision and choice behavior. Hillsdale, NJ: Erlbaum.
Selfridge, O. G., Rissland, E. L., and Arbib, M. A. (1984). Adaptive control of ill-defined systems. New York: Plenum Press.
Sheridan, T., and Hennessy, R. (Eds.). (1984). Research and modeling of supervisory control behavior. Washington, DC: National Academy Press.
Shneiderman, B. (1986). Seven plus or minus two central issues in human-computer interaction. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 343-349). New York: ACM/SIGCHI.
Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S., and Suchman, L. (1985, September). Beyond the chalkboard: Using computers to support collaboration and problem solving in meetings (Tech. Report). Palo Alto, CA: Intelligent Systems Laboratory, Xerox Palo Alto Research Center.
Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.
Wiecha, C., and Henrion, M. (1987). A graphical tool for structuring and understanding quantitative decision models. In Proceedings of Workshop on Visual Languages. New York: IEEE Computer Society.
Wiener, E. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.
Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229-244.
Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.
Woods, D. D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L. P. Goodstein, H. B. Andersen, and S. E. Olsen (Eds.), Mental models, tasks and errors: A collection of essays to celebrate Jens Rasmussen's 60th birthday. London: Taylor and Francis.
Woods, D. D., and Hollnagel, E. (1987). Mapping cognitive demands in complex problem solving worlds (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 257-275.
Woods, D. D., and Roth, E. M. (1986). Models of cognitive behavior in nuclear power plant personnel (NUREG-CR-4532). Washington, DC: U.S. Nuclear Regulatory Commission.
Woods, D. D., and Roth, E. M. (1988a). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interaction. New York: North-Holland.
Woods, D. D., and Roth, E. M. (1988b). Aiding human performance II: From cognitive analysis to support systems. Le Travail Humain, 51, 139-172.
Woods, D. D., Roth, E. M., and Pople, H. (1987). Cognitive Environment Simulation: An artificial intelligence system for human performance assessment (NUREG-CR-4862). Washington, DC: U.S. Nuclear Regulatory Commission.


is adequately debugged), the practitioner's trust in the machine's competence can be undermined. Muir's analysis shows how variations in explanation and display facilities affect how the person will use the machine, by affecting his or her ability to see how the machine works and therefore his or her level of calibration. Muir also points out how human information-processing biases can affect how the evidence of experience is interpreted in the calibration process.

A second metaphor that is frequently invoked is supervisory control (Rasmussen, 1986; Sheridan and Hennessy, 1984). Again the machine element is thought of as a semiautonomous cognitive system, but in this case it is a lower-order subordinate, albeit partially autonomous. The human supervisor generally has a wider range of responsibility, and he or she possesses ultimate responsibility and authority. Boy (1987) uses this metaphor to guide the development of assistant systems built from AI technology.

In order for a supervisory control architecture between human and machine agents to function effectively, several requirements must be met that, as Woods (1986) has pointed out, are often overlooked when tool-driven constraints dominate design. First, the supervisor must have real as well as titular authority; machine problem solvers can be designed and introduced in such a way that the human retains the responsibility for outcomes without any effective authority. Second, the supervisor must be able to redirect the lower-order machine cognitive system. Roth et al. (1987) found that some practitioners tried to devise ways to instruct an expert system in situations in which the machine's problem solving had broken down, even when the machine's designer had provided no such mechanisms. Third, in order to be able to supervise another agent, there is need for a common or shared representation of the state of the world and of the state of the problem-solving process (Woods and Roth, 1988b); otherwise communication between the agents will break down (e.g., Suchman, 1987).

Significant attention has been devoted to the issue of how to get intelligent machines to assess the goals and intentions of humans without requiring explicit statements (e.g., Allen and Perrault, 1980; Quinn and Russell, 1986). However, the supervisory control metaphor highlights that it is at least as important to pay attention to what information or knowledge people need to track the intelligent machine's state of mind (Woods and Roth, 1988a).

A third metaphor is to consider the new machine capabilities as extensions and expansions along a dimension of machine power. In this metaphor machines are tools; people are tool builders and tool users. Technological development has moved from physical tools (tools that magnify capacity for physical work) to perceptual tools (extensions to the human perceptual apparatus, such as medical imaging) and now, with the arrival of AI technology, to cognitive tools. (Although this type of tool has a much longer history, e.g., aide-mémoires or decision analysis, AI has certainly increased the interest in and ability to provide cognitive tools.)

In this metaphor the question of the rela-tionship between machine and human takesthe form of what kind of tool is an intelligentmachine (eg Ehn and Kyng 1984 Such-man 1987 Woods 1986) At one extreme themachine can be a prosthesis that compen-sates for a deficiency in human reasoning orproblem solving This could be a local defi-ciency for the population of expected humanpractitioners or a global weakness in humanreasoning At the other extreme the machinecan be an instrument in the hands of a funda-mentally competent but limited-resource

424-August 1988

human practitioner (Woods 1986) The ma-chine aids the practitioner by providing in-creased or new kinds of resources (eitherknowledge resources or processing resourcessuch as an expanded field of attention)

The extra resources may support improvedperformance in several ways One path is tooff-load overhead information-processing ac-tivities from the person to the machine toallow the human practitioner to focus his orher resources on higher-level issues andstrategies Examples include keeping track ofmultiple ongoing activities in an externalmemory performing basic data computa-tions or transformations and collecting theevidence related to decisions about particu-lar domain issues as occurred recently withnew computer-based displays in nuclearpower plant control rooms Extra resourcesmay help to improve performance in anotherway by allowing a restructuring of how thehuman performs the task shifting perfor-mance onto a new higher plateau (see Pea1985) This restructuring concept is in con-trast to the usual notion of new systems asamplifiers of user capabilities As Pea (1985)points out the amplification metaphor im-plies that support systems improve humanperformance by increasing the strength orpower of the cognitive processes the humanproblem solver goes through to solve theproblem but without any change in the un-derlying activities processes or strategiesthat determine how the problem is solvedAlternatively the resources provided (or notprovided) by new performance aids and in-terface systems can support restructuring ofthe activities processes or strategies thatcarry out the cognitive functions relevant toperforming domain tasks (eg Woods andRoth 1988) New levels of performance arenow possible and the kinds of errors one isprone to (and therefore the consequences oferrors) change as well

HUMAN FACTORS

The instrumental perspective suggests thatthe most effective power provided by goodcognitive tools is conceptualization power(Woods and Roth 1988a) The importance ofconceptualization power in effective prob-lem-solving performance is often overlookedbecause the part of the problem-solving pro-cess that it most crucially affects problemformulation and reformulation is often leftout of studies of problem solving and the de-sign basis of new support systems Supportsystems that increase conceptualizationpower (1) enhance a problem solvers abilityto experiment with possible worlds or strate-gies (eg Hollan Hutchins and Weitzman1984 Pea 1985 Woods et aI 1987) (2) en-hance their ability to visualize or to makeconcrete the abstract and uninspectable(analogous to perceptual tools) in order tobetter see the implications of concept and tohelp one restructure ones view of the prob-lem (Becker and Cleveland 1984 Coombsand Hartley 1987 Hutchins Hollan andNorman 1985 Pople 1985) and (3) to en-hance error detection by providing betterfeedback about the effectsresults of actions(Rizzo Bagnara and Visciola 1987)

Problem-Driven

Cognitive engineering is problem-driven, tool-constrained. This means that cognitive engineering must be able to analyze a problem-solving context and understand the sources of both good and poor performance; that is, the cognitive problems to be solved or challenges to be met (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

To build a cognitive description of a problem-solving world, one must understand how representations of the world interact with different cognitive demands imposed by the application world in question and with characteristics of the cognitive agents, both for existing and prospective changes in the world. Building a cognitive description is part of a problem-driven approach to the application of computational power.

The results from this analysis are used to define the kind of solutions that are needed to enhance successful performance: to meet cognitive demands of the world, to help the human function more expertly, and to eliminate or mitigate error-prone points in the total cognitive system (demand-resource mismatches). The results of this process then can be deployed in many possible ways, as constrained by tool-building limitations and tool-building possibilities: exploration training worlds, new information representation aids, advisory systems, or machine problem solvers (see Roth and Woods, 1988; Woods and Roth, 1988).

In tool-driven approaches, knowledge acquisition focuses on describing domain knowledge in terms of the syntax of computational mechanisms; that is, the language of implementation is used as a cognitive language. Semantic questions are displaced either to whoever selects the computational mechanisms or to the domain expert who enters knowledge.

The alternative is to provide an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean about problem solving in the domain (Woods, 1988). Acquiring and using a domain semantics is essential to avoiding potential errors and specifying performance boundaries when building intelligent machines (Roth et al., 1987). Techniques for analyzing cognitive demands not only help characterize a particular world but also help to build a repertoire of general cognitive situations that are transportable. There is a clear trend toward this conception of knowledge acquisition in order to achieve more effective decision support and fewer brittle machine problem solvers (e.g., Clancey, 1985; Coombs, 1986; Gruber and Cohen, 1987).

AN EXAMPLE OF HOW COGNITIVE AND COMPUTATIONAL TECHNOLOGIES INTERACT

To illustrate the role of cognitive engineering in the deployment of new computational powers, consider a case in human-computer interaction (for other cases see Mitchell and Forren, 1987; Mitchell and Saisi, 1987; Roth and Woods, 1988; Woods and Roth, 1988). It is one example of how purely technology-driven deployment of new automation capabilities can produce unintended and unforeseen negative consequences. In this case an attempt was made to implement a computerized procedure system using a commercial hypertext system for building and navigating large network data bases. Because cognitive engineering issues were not considered in the application of the new technology, a high-level person-machine performance problem resulted: the getting-lost phenomenon (Woods, 1984). Based on a cognitive analysis of the world's demands, it was possible to redesign the system to support domain actors and eliminate the getting-lost problems (Elm and Woods, 1985). Through cognitive engineering it proved possible to build a more effective computerized procedure system that, for the most part, was within the technological boundaries set by the original technology chosen for the application.

The data base application in question was designed to computerize paper-based instructions for nuclear power plant emergency operation. The system was built based on a network data base shell with a built-in interface for navigating the network (Robertson, McCracken, and Newell, 1981). The shell already treated human-computer interface issues, so it was assumed possible to create the computerized system simply by entering domain knowledge (i.e., the current instructions as implemented for the paper medium) into the interface and network data base framework provided by the shell.

The system contained two kinds of frames: menu frames, which served to point to other frames, and content frames, which contained instructions from the paper procedures (generally one procedure step per frame). In preliminary tests of the system it was found that people uniformly failed to complete recovery tasks with procedures computerized in this way. They became disoriented or "lost": unable to keep procedure steps in pace with plant behavior, unable to determine where they were in the network of frames, unable to decide where to go next, or unable even to find places where they knew they should be (i.e., they diagnosed the situation and knew the appropriate responses as trained operators, yet could not find the relevant procedural steps in the network). These results were found with people experienced with the paper-based procedures and plant operations, as well as with people knowledgeable in the frame-network software package and how the procedures were implemented within it.
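The two-frame structure just described can be sketched as a small data model (frame and step names are invented, not taken from the plant procedures). Each frame shows either one step's text or a list of pointers, so anything beyond the current frame requires explicit navigation through menu frames: the "narrow window on the world" that the tests exposed.

```python
class Frame:
    """A node in the procedure network: either a menu or a content frame."""
    def __init__(self, name, kind, children=(), step_text=None):
        self.name = name
        self.kind = kind              # "menu" or "content"
        self.children = list(children)
        self.step_text = step_text

# A miniature, invented procedure network.
step1 = Frame("E-1 step 1", "content", step_text="Verify reactor trip")
step2 = Frame("E-1 step 2", "content", step_text="Verify safety injection")
menu_e1 = Frame("E-1 menu", "menu", children=[step1, step2])
root = Frame("top menu", "menu", children=[menu_e1])

def visible(frame):
    """All the user can see from one frame: its own step or its menu entries."""
    if frame.kind == "content":
        return [frame.step_text]
    return [c.name for c in frame.children]

# Reading ahead from step 1 to step 2 means backing out to the menu and
# descending again: several frame transitions for what a glance does in a book.
```

The model makes the disorientation mechanism visible: from `step1`, neither the next step nor the overall plan is in view.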

What was the source of the disorientation problem? It resulted from a failure to analyze the cognitive demands associated with using procedures in an externally paced world. For example, in using the procedures the operator often is required to interrupt one activity and transition to another step in the procedure, or to a different procedure, depending on plant conditions and plant responses to operator actions. As a result, operators need to be able to rapidly transition across procedure boundaries and to return to incomplete steps. Because of the size of a frame, there was a very high proportion of menu frames relative to content frames, and the content frames provided a narrow window on the world. This structure made it difficult to read ahead to anticipate instructions, to mark steps pending completion and return to them easily, to see the organization of the steps, or to mark a trail of activities carried out during the recovery. Many activities that are inherently easy to perform in a physical book turned out to be very difficult to carry out, for example, reading ahead. The result was a mismatch between user information-processing activities during domain tasks and the structure of the interface as a representation of the world of recovery from abnormalities.

These results triggered a full design cycle that began with a cognitive analysis to determine the user information-handling activities needed to effectively accomplish recovery tasks in emergency situations. Following procedures was not simply a matter of linear, step-by-step execution of instructions; rather, it required the ability to maintain a broad context of the purpose and relationships among the elements in the procedure (see also Brown et al., 1982; Roth et al., 1987). Operators needed to maintain awareness of the global context (i.e., how a given step fits into the overall plan), to anticipate the need for actions by looking ahead, and to monitor for changes in plant state that would require adaptation of the current response plan.

A variety of cognitive engineering techniques were utilized in a new interface design to support the demands (see Woods, 1984). First, a spatial metaphor was used to make the system more like a physical book. Second, display selection/movement options were presented in parallel, rather than sequentially, with procedural information (defining two types of windows in the interface). Transition options were presented at several grains of analysis to support moves from step to step as easily as moves across larger units in the structure of the procedure system. In addition, incomplete steps were automatically tracked, and those steps were made directly accessible (e.g., electronic bookmarks or placeholders).
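The automatic placeholder mechanism can be sketched as follows; the class and step names are hypothetical. The design choice it illustrates is that a step the operator leaves without completing is bookmarked by the system itself, so returning to interrupted work costs nothing.

```python
class ProcedureSession:
    """Sketch of automatic tracking of incomplete steps (electronic bookmarks)."""
    def __init__(self):
        self.current = None
        self.incomplete = []   # the electronic bookmarks / placeholders

    def open_step(self, step):
        # Moving away from an unfinished step bookmarks it automatically.
        if self.current is not None and self.current not in self.incomplete:
            self.incomplete.append(self.current)
        self.current = step

    def complete_step(self):
        step, self.current = self.current, None
        if step in self.incomplete:
            self.incomplete.remove(step)
        return step

    def bookmarks(self):
        # Interrupted steps remain directly accessible.
        return list(self.incomplete)
```

For example, an operator interrupted in one procedure to handle a higher-priority one finds the abandoned step waiting in `bookmarks()` rather than having to relocate it in the network.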

To provide the global context within which the current procedure step occurs, the step of interest is presented in detail and is embedded in a skeletal structure of the larger response plan of which it is a part (Furnas, 1986; Woods, 1984). Context sensitivity was supported by displaying the rules for possible adaptation or shifts in the current response plan that are relevant to the current context, in parallel with current relevant options and the current region of interest in the procedures (a third window). Note how the cognitive analysis of the domain defined what types of data needed to be seen effectively in parallel, which then determined the number of windows required. Also note that the cognitive engineering redesign was, with a few exceptions, directly implementable within the base capabilities of the interface shell.
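The detail-plus-skeleton display is in the spirit of Furnas's (1986) generalized fisheye view, where each item gets a degree of interest DOI(x | focus) = API(x) - D(x, focus): a priori importance minus distance from the current focus, with only items above a threshold shown. Below is a hedged sketch applied to an invented miniature procedure tree; the weighting (importance as shallowness in the tree) and threshold are illustrative choices, not the paper's design.

```python
# Invented miniature procedure hierarchy: procedure -> steps -> substeps.
procedure = {
    "E-1": {"E-1.1": {}, "E-1.2": {"E-1.2a": {}, "E-1.2b": {}}, "E-1.3": {}},
}

def paths(tree, prefix=()):
    """Yield every node as a path tuple from the root."""
    for name, sub in tree.items():
        node = prefix + (name,)
        yield node
        yield from paths(sub, node)

def distance(a, b):
    """Tree distance: steps from each node up to their common ancestor."""
    common = 0
    for x, y in zip(a, b):
        if x != y:
            break
        common += 1
    return (len(a) - common) + (len(b) - common)

def fisheye(tree, focus, api=lambda node: -len(node), threshold=-4):
    """Generalized-fisheye selection: keep nodes whose a priori importance
    (here, shallowness) minus distance from the focus clears a threshold."""
    return [n for n in paths(tree) if api(n) - distance(n, focus) >= threshold]
```

With the focus on one substep, the selection yields the focus plus its ancestors: exactly the skeletal structure of the larger response plan surrounding the step of interest.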

As the foregoing example saliently demonstrates, there can be severe penalties for failing to adequately map the cognitive demands of the environment. However, if we understand the cognitive requirements imposed by the domain, then a variety of techniques can be employed to build support systems for those functions.

SUMMARY

The problem of providing effective decision support hinges on how the designer decides what will be useful in a particular application. Can researchers provide designers with concepts and techniques to determine what will be useful support systems, or are we condemned to simply build what can be built practically and wait for the judgment of experience? Is principle-driven design possible?

A vigorous and viable cognitive engineering can provide the knowledge and techniques necessary for principle-driven design. Cognitive engineering does this by providing the basis for a problem-driven, rather than a technology-driven, approach whereby the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver. Cognitive engineering can address (a) existing cognitive systems, in order to identify deficiencies that cognitive system redesign can correct, and (b) prospective cognitive systems, as a design tool during the allocation of cognitive tasks and the development of an effective joint architecture. In this paper we have attempted to outline the questions that need to be answered to make this promise real and to point to research that already has begun to provide the necessary concepts and techniques.

REFERENCES

Allen, J., and Perrault, C. (1980). Analyzing intention in utterances. Artificial Intelligence, 15, 143-178.
Becker, R. A., and Cleveland, W. S. (1984). Brushing the scatterplot matrix: High-interaction graphical methods for analyzing multidimensional data (Tech. Report). Murray Hill, NJ: AT&T Bell Laboratories.
Boy, G. A. (1987). Operator assistant systems. International Journal of Man-Machine Studies, 27, 541-554. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.
Bransford, J., Sherwood, R., Vye, N., and Rieser, J. (1986). Teaching and problem solving: Research foundations. American Psychologist, 41, 1078-1089.
Brown, A. L., Bransford, J. D., Ferrara, R. A., and Campione, J. C. (1983). Learning, remembering, and understanding. In J. H. Flavell and E. M. Markman (Eds.), Carmichael's manual of child psychology. New York: Wiley.
Brown, A. L., and Campione, J. C. (1986). Psychological theory and the study of learning disabilities. American Psychologist, 41, 1059-1068.
Brown, J. S., and Burton, R. R. (1978). Diagnostic models for procedural bugs in basic mathematics. Cognitive Science, 2, 155-192.
Brown, J. S., Moran, T. P., and Williams, M. D. (1982). The semantics of procedures (Tech. Report). Palo Alto, CA: Xerox Palo Alto Research Center.
Brown, J. S., and Newman, S. E. (1985). Issues in cognitive and social ergonomics: From our house to Bauhaus. Human-Computer Interaction, 1, 359-391.
Brown, J. S., and VanLehn, K. (1980). Repair theory: A generative theory of bugs in procedural skills. Cognitive Science, 4, 379-426.
Carroll, J. M., and Carrithers, C. (1984). Training wheels in a user interface. Communications of the ACM, 27, 800-806.
Cheng, P. W., and Holyoak, K. J. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.
Cheng, P., Holyoak, K., Nisbett, R., and Oliver, L. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 18, 293-328.
Clancey, W. J. (1985). Heuristic classification. Artificial Intelligence, 27, 289-350.
Cohen, P., Day, D., Delisio, J., Greenberg, M., Kjeldsen, R., and Suthers, D. (1987). Management of uncertainty in medicine. In Proceedings of the IEEE Conference on Computers and Communications. New York: IEEE.
Coombs, M. J. (1986). Artificial intelligence and cognitive technology: Foundations and perspectives. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.
Coombs, M. J., and Alty, J. L. (1984). Expert systems: An alternative paradigm. International Journal of Man-Machine Studies, 20, 21-43.
Coombs, M. J., and Hartley, R. T. (1987). The MGR algorithm and its application to the generation of explanations for novel events. International Journal of Man-Machine Studies, 27, 679-708. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.
Davis, R. (1983). Reasoning from first principles in electronic troubleshooting. International Journal of Man-Machine Studies, 19, 403-423.
De Keyser, V. (1986). Les interactions hommes-machine: Caracteristiques et utilisations des differents supports d'information par les operateurs (Person-machine interaction: How operators use different information channels). Rapport Politique Scientifique/FAST no. 8. Liege, Belgium: Psychologie du Travail, Universite de l'Etat a Liege.
Dennett, D. (1982). Beyond belief. In A. Woodfield (Ed.), Thought and object. Oxford: Clarendon Press.
Dorner, D. (1983). Heuristics and cognition in complex systems. In R. Groner, M. Groner, and W. F. Bischof (Eds.), Methods of heuristics. Hillsdale, NJ: Erlbaum.
Ehn, P., and Kyng, M. (1984). A tool perspective on design of interactive computer support for skilled workers. Unpublished manuscript, Swedish Center for Working Life, Stockholm.
Elm, W. C., and Woods, D. D. (1985). Getting lost: A case study in interface design. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 927-931). Santa Monica, CA: Human Factors Society.
Fischhoff, B. (1986). Decision making in complex systems. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support. New York: Springer-Verlag.
Fischhoff, B., Slovic, P., and Lichtenstein, S. (1979). Improving intuitive judgment by subjective sensitivity analysis. Organizational Behavior and Human Performance, 23, 339-359.
Fischhoff, B., Lanir, Z., and Johnson, S. (1986). Military risk taking and modern C3I (Tech. Report 86-2). Eugene, OR: Decision Research.
Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgments. Psychological Bulletin, 101, 75-90.
Furnas, G. W. (1986). Generalized fisheye views. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 16-23). New York: ACM/SIGCHI.
Gadd, C. S., and Pople, H. E. (1987). An interpretation synthesis model of medical teaching rounds discourse: Implications for expert system interaction. International Journal of Educational Research, 1.
Gentner, D., and Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Erlbaum.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Gick, M. L., and Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-365.
Glaser, R. (1984). Education and thinking: The role of knowledge. American Psychologist, 39, 93-104.
Gruber, T., and Cohen, P. (1987). Design for acquisition: Principles of knowledge system design to facilitate knowledge acquisition (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 143-159.
Henderson, A., and Card, S. (1986). Rooms: The use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface (Tech. Report). Palo Alto, CA: Xerox PARC.
Hirschhorn, L. (1984). Beyond mechanization: Work and technology in a postindustrial age. Cambridge, MA: MIT Press.
Hollan, J., Hutchins, E., and Weitzman, L. (1984). Steamer: An interactive inspectable simulation-based training system. AI Magazine, 4, 15-27.
Hollnagel, E., Mancini, G., and Woods, D. D. (Eds.). (1986). Intelligent decision support in process environments. New York: Springer-Verlag.
Hollnagel, E., and Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.
Hoogovens Report. (1976). Human factors evaluation: Hoogovens No. 2 hot strip mill (Tech. Report FR251). London: British Steel Corporation/Hoogovens.
Hutchins, E., Hollan, J., and Norman, D. A. (1985). Direct manipulation interfaces. Human-Computer Interaction, 1, 311-338.
James, W. (1890). The principles of psychology. New York: Holt.
Kieras, D. E., and Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365-394.
Klein, G. A. (in press). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine research, vol. 5. Greenwich, CT: JAI Press.
Kotovsky, K., Hayes, J. R., and Simon, H. A. (1985). Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 17, 248-294.
Kruglanski, A., Friedland, N., and Farkash, E. (1984). Lay persons' sensitivity to statistical information: The case of high perceived applicability. Journal of Personality and Social Psychology, 46, 503-518.
Lesh, R. (1987). The evolution of problem representations in the presence of powerful conceptual amplifiers. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics. Hillsdale, NJ: Erlbaum.
Malone, T. W. (1983). How do people organize their desks? Implications for designing office automation systems. ACM Transactions on Office Information Systems, 1, 99-112.
Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press. (Special issue of International Journal of Man-Machine Studies, vol. 27)
March, J. G., and Weissinger-Baylon, R. (Eds.). (1986). Ambiguity and command. Marshfield, MA: Pitman Publishing.
McKendree, J., and Carroll, J. M. (1986). Advising roles of a computer consultant. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 35-40). New York: ACM/SIGCHI.
Miller, P. L. (1983). ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 449-461.
Mitchell, C., and Forren, M. G. (1987). Multimodal user input to supervisory control systems: Voice-augmented keyboard. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 594-607.
Mitchell, C., and Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.
Miyata, Y., and Norman, D. A. (1986). Psychological issues in support of multiple activities. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.
Montmollin, M. de, and De Keyser, V. (1985). Expert logic vs. operator logic. In G. Johannsen, G. Mancini, and L. Martensson (Eds.), Analysis, design and evaluation of man-machine systems. CEC-JRC Ispra, Italy: IFAC.
Muir, B. (1987). Trust between humans and machines. International Journal of Man-Machine Studies, 27, 527-539. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.
Newell, A., and Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209-242.
Noble, D. F. (1984). Forces of production: A social history of industrial automation. New York: Alfred A. Knopf.
Norman, D. A. (1981). Steps towards a cognitive engineering (Tech. Report). San Diego: University of California, San Diego, Program in Cognitive Science.
Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the ACM, 26, 254-258.
Norman, D. A., and Draper, S. W. (1986). User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.
Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.
Perkins, D., and Martin, F. (1986). Fragile knowledge and neglected strategies in novice programmers. In E. Soloway and S. Iyengar (Eds.), Empirical studies of programmers. Norwood, NJ: Ablex.
Pew, R. W., et al. (1986). Cockpit automation technology (Tech. Report 6133). Cambridge, MA: BBN Laboratories, Inc.
Pope, R. H. (1978). Power station control room and desk design: Alarm system and experience in the use of CRT displays. In Proceedings of the International Symposium on Nuclear Power Plant Control and Instrumentation. Cannes, France.
Pople, H., Jr. (1985). Evolution of an expert system: From Internist to Caduceus. In L. De Lotto and M. Stefanelli (Eds.), Artificial intelligence in medicine. New York: Elsevier Science Publishers.
Quinn, L., and Russell, D. M. (1986). Intelligent interfaces: User models and planners. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 314-320). New York: ACM/SIGCHI.
Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.
Rizzo, A., Bagnara, S., and Visciola, M. (1987). Human error detection processes. International Journal of Man-Machine Studies, 27, 555-570. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.
Robertson, G., McCracken, D., and Newell, A. (1981). The ZOG approach to man-machine communication. International Journal of Man-Machine Studies, 14, 461-488.
Rochlin, G. I., La Porte, T. R., and Roberts, K. H. (in press). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review.
Roth, E. M., Bennett, K., and Woods, D. D. (1987). Human interaction with an "intelligent" machine. International Journal of Man-Machine Studies, 27, 479-525. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.
Roth, E. M., and Woods, D. D. (1988). Aiding human performance: I. Cognitive analysis. Le Travail Humain, 51(1), 39-64.
Schum, D. A. (1980). Current developments in research on cascaded inference. In T. S. Wallsten (Ed.), Cognitive processes in decision and choice behavior. Hillsdale, NJ: Erlbaum.
Selfridge, O. G., Rissland, E. L., and Arbib, M. A. (1984). Adaptive control of ill-defined systems. New York: Plenum Press.
Sheridan, T., and Hennessy, R. (Eds.). (1984). Research and modeling of supervisory control behavior. Washington, DC: National Academy Press.
Shneiderman, B. (1986). Seven plus or minus two central issues in human-computer interaction. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 343-349). New York: ACM/SIGCHI.
Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S., and Suchman, L. (1985, September). Beyond the chalkboard: Using computers to support collaboration and problem solving in meetings (Tech. Report). Palo Alto, CA: Intelligent Systems Laboratory, Xerox Palo Alto Research Center.
Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.
Wiecha, C., and Henrion, M. (1987). A graphical tool for structuring and understanding quantitative decision models. In Proceedings of Workshop on Visual Languages. New York: IEEE Computer Society.
Wiener, E. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.
Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229-244.
Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.
Woods, D. D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L. P. Goodstein, H. B. Andersen, and S. E. Olsen (Eds.), Mental models, tasks and errors: A collection of essays to celebrate Jens Rasmussen's 60th birthday. London: Taylor and Francis.
Woods, D. D., and Hollnagel, E. (1987). Mapping cognitive demands in complex problem-solving worlds (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 257-275.
Woods, D. D., and Roth, E. M. (1986). Models of cognitive behavior in nuclear power plant personnel (NUREG-CR-4532). Washington, DC: U.S. Nuclear Regulatory Commission.
Woods, D. D., and Roth, E. M. (1988a). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interaction. New York: North-Holland.
Woods, D. D., and Roth, E. M. (1988b). Aiding human performance: II. From cognitive analysis to support systems. Le Travail Humain, 51, 139-172.
Woods, D. D., Roth, E. M., and Pople, H. (1987). Cognitive Environment Simulation: An artificial intelligence system for human performance assessment (NUREG-CR-4862). Washington, DC: U.S. Nuclear Regulatory Commission.


The extra resources may support improvedperformance in several ways One path is tooff-load overhead information-processing ac-tivities from the person to the machine toallow the human practitioner to focus his orher resources on higher-level issues andstrategies Examples include keeping track ofmultiple ongoing activities in an externalmemory performing basic data computa-tions or transformations and collecting theevidence related to decisions about particu-lar domain issues as occurred recently withnew computer-based displays in nuclearpower plant control rooms Extra resourcesmay help to improve performance in anotherway by allowing a restructuring of how thehuman performs the task shifting perfor-mance onto a new higher plateau (see Pea1985) This restructuring concept is in con-trast to the usual notion of new systems asamplifiers of user capabilities As Pea (1985)points out the amplification metaphor im-plies that support systems improve humanperformance by increasing the strength orpower of the cognitive processes the humanproblem solver goes through to solve theproblem but without any change in the un-derlying activities processes or strategiesthat determine how the problem is solvedAlternatively the resources provided (or notprovided) by new performance aids and in-terface systems can support restructuring ofthe activities processes or strategies thatcarry out the cognitive functions relevant toperforming domain tasks (eg Woods andRoth 1988) New levels of performance arenow possible and the kinds of errors one isprone to (and therefore the consequences oferrors) change as well

HUMAN FACTORS

The instrumental perspective suggests thatthe most effective power provided by goodcognitive tools is conceptualization power(Woods and Roth 1988a) The importance ofconceptualization power in effective prob-lem-solving performance is often overlookedbecause the part of the problem-solving pro-cess that it most crucially affects problemformulation and reformulation is often leftout of studies of problem solving and the de-sign basis of new support systems Supportsystems that increase conceptualizationpower (1) enhance a problem solvers abilityto experiment with possible worlds or strate-gies (eg Hollan Hutchins and Weitzman1984 Pea 1985 Woods et aI 1987) (2) en-hance their ability to visualize or to makeconcrete the abstract and uninspectable(analogous to perceptual tools) in order tobetter see the implications of concept and tohelp one restructure ones view of the prob-lem (Becker and Cleveland 1984 Coombsand Hartley 1987 Hutchins Hollan andNorman 1985 Pople 1985) and (3) to en-hance error detection by providing betterfeedback about the effectsresults of actions(Rizzo Bagnara and Visciola 1987)

Problem-Driven

Cognitive engineering is problem-driventool-constrained This means that cognitiveengineering must be able to analyze a prob-lem-solving context and understand thesources of both good and poor performance-that is the cognitive problems to be solvedor challenges to be met (eg Rasmussen1986 Woods and Hollnagel 1987)

To build a cognitive description of a prob-lem-solving world one must understand howrepresentations of the world interact withdifferent cognitive demands imposed by theapplication world in question and with char-acteristics of the cognitive agents both forexisting and prospective changes in the

COGNITIVE ENGINEERING

world Building of a cognitive description ispart of a problem-driven approach to the ap-plication of computational power

The results from this analysis are used todefine the kind of solutions that are needed toenhance successful performance-to meetcognitive demands of the world to help thehuman function more expertly to eliminateor mitigate error-prone points in the totalcognitive system (demand-resource mis-matches) The results of this process then canbe deployed in many possible ways as con-strained by tool-building limitations andtool-building possibili ties-explora tiontraining worlds new information represen-tation aids advisory systems or machineproblem solvers (see Roth and Woods 1988Woods and Roth 1988)

In tool-driven approaches knowledge ac-quisi tion focuses on describing domainknowledge in terms of the syntax of compu-tational mechanisms-that is the languageof implementation is used as a cognitive lan-guage Semantic questions are displaced ei-ther to whomever selects the computationalmechanisms or to the domain expert whoenters knowledge

The alternative is to provide an umbrellastructure of domain semantics that organizesand makes explicit what particular pieces ofknowledge mean about problem solving inthe domain (Woods 1988) Acquiring andusing a domain semantics is essential toavoiding potential errors and specifying per-formance boundaries when building intelli-gent machines (Roth et aI 1987) Tech-niques for analyzing cognitive demands notonly help characterize a particular world butalso help to build a repertoire of general cog-nitive situations that are transportableThere is a clear trend toward this conceptionof knowledge acquisition in order to achievemore effective decision support and fewerbrittle machine problem solvers (eg Clan-

August 1988-425

cey 1985 Coombs 1986 Gruber and Cohen1987)

AN EXAMPLE OF HOW COGNITIVEAND COMPUTATIONAL

TECHNOLOGIES INTERACT

To illustrate the role of cognitive engineer-ing in the deployment of new computationalpowers consider a case in human-computerinteraction (for other cases see Mitchell andForren 1987 Mitchell and Saisi 1987 Rothand Woods 1988 Woods and Roth 1988) Itis one example of how purely technology-driven deployment of new automation capa-bilities can produce unintended and unfore-seen negative consequences In this case anattempt was made to implement a computer-ized procedure system using a commercialhypertext system for building and navigatinglarge network data bases Because cognitiveengineering issues were not considered in theapplication of the new technology a high-level person-machine performance problemresulted-the getting lost phenomenon(Woods 1984) Based on a cognitive analysisof the worlds demands it was possible to re-design the system to support domain actorsand eliminate the getting-lost problems (Elmand Woods 1985) Through cognitive engi-neering it proved possible to build a more ef-fective computerized procedure system thatfor the most part was within the technologi-cal boundaries set by the original technologychosen for the application

The data base application in question was designed to computerize paper-based instructions for nuclear power plant emergency operation. The system was built based on a network data base shell with a built-in interface for navigating the network (Robertson, McCracken, and Newell, 1981). The shell already treated human-computer interface issues, so it was assumed possible to create the computerized system simply by entering domain knowledge (i.e., the current instructions as implemented for the paper medium) into the interface and network data base framework provided by the shell.

The system contained two kinds of frames: menu frames, which served to point to other frames, and content frames, which contained instructions from the paper procedures (generally one procedure step per frame). In preliminary tests of the system it was found that people uniformly failed to complete recovery tasks with procedures computerized in this way. They became disoriented or "lost": unable to keep procedure steps in pace with plant behavior, unable to determine where they were in the network of frames, unable to decide where to go next, or unable even to find places where they knew they should be (i.e., they diagnosed the situation and knew the appropriate responses as trained operators, yet could not find the relevant procedural steps in the network). These results were found with people experienced with the paper-based procedures and plant operations as well as with people knowledgeable in the frame-network software package and how the procedures were implemented within it.
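
The two frame types can be made concrete with a small sketch. This is a hypothetical reconstruction in Python, not the actual data structures of the ZOG-style shell; the frame names and fields are invented for illustration. It shows why the structure yields a "narrow window" on the procedures: each content frame holds roughly one step, reachable only through chains of menu frames.

```python
# Hypothetical sketch of the two frame types described above
# (invented names and fields; not the original shell's API).
from dataclasses import dataclass, field

@dataclass
class Frame:
    frame_id: str

@dataclass
class MenuFrame(Frame):
    # A menu frame only points to other frames.
    options: list = field(default_factory=list)  # frame_ids of targets

@dataclass
class ContentFrame(Frame):
    # A content frame holds roughly one procedure step, so the
    # operator sees only a narrow slice of the procedure at a time.
    step_text: str = ""
    next_options: list = field(default_factory=list)

# A tiny network: reaching any step means descending through menus,
# and each visited content frame shows just one step in isolation.
network = {
    "menu-root": MenuFrame("menu-root", options=["menu-E-0", "menu-E-1"]),
    "menu-E-0": MenuFrame("menu-E-0", options=["E-0-step-1"]),
    "E-0-step-1": ContentFrame("E-0-step-1",
                               step_text="Verify reactor trip.",
                               next_options=["menu-E-0"]),
}
```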

What was the source of the disorientation problem? It resulted from a failure to analyze the cognitive demands associated with using procedures in an externally paced world. For example, in using the procedures the operator often is required to interrupt one activity and transition to another step in the procedure, or to a different procedure, depending on plant conditions and plant responses to operator actions. As a result, operators need to be able to rapidly transition across procedure boundaries and to return to incomplete steps. Because of the size of a frame, there was a very high proportion of menu frames relative to content frames, and the content frames provided a narrow window on the world. This structure made it difficult to read ahead to anticipate instructions, to mark steps pending completion and return to them easily, to see the organization of the steps, or to mark a trail of activities carried out during the recovery. Many activities that are inherently easy to perform in a physical book turned out to be very difficult to carry out, for example, reading ahead. The result was a mismatch between user information-processing activities during domain tasks and the structure of the interface as a representation of the world of recovery from abnormalities.

These results triggered a full design cycle that began with a cognitive analysis to determine the user information-handling activities needed to effectively accomplish recovery tasks in emergency situations. Following procedures was not simply a matter of linear, step-by-step execution of instructions; rather, it required the ability to maintain a broad context of the purpose and relationships among the elements in the procedure (see also Brown et al., 1982; Roth et al., 1987). Operators needed to maintain awareness of the global context (i.e., how a given step fits into the overall plan), to anticipate the need for actions by looking ahead, and to monitor for changes in plant state that would require adaptation of the current response plan.

A variety of cognitive engineering techniques were utilized in a new interface design to support the demands (see Woods, 1984). First, a spatial metaphor was used to make the system more like a physical book. Second, display selection/movement options were presented in parallel, rather than sequentially, with procedural information (defining two types of windows in the interface). Transition options were presented at several grains of analysis to support moves from step to step as easily as moves across larger units in the structure of the procedure system. In addition, incomplete steps were automatically tracked, and those steps were made directly accessible (e.g., electronic bookmarks or placeholders).
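
The automatic tracking of incomplete steps can be sketched as follows. This is an assumed design, not the original implementation; the class and method names are invented. It illustrates the "electronic bookmark" idea: steps that are entered but interrupted remain directly accessible as return points, and a trail of activities carried out during the recovery is kept as a side effect.

```python
# Minimal sketch (assumed design, not the original implementation)
# of automatic tracking of incomplete steps: electronic bookmarks.
class ProcedureTracker:
    def __init__(self):
        self.pending = []   # steps entered but not yet completed
        self.trail = []     # trail of activities during the recovery

    def enter_step(self, step_id):
        self.trail.append(("enter", step_id))
        if step_id not in self.pending:
            self.pending.append(step_id)

    def complete_step(self, step_id):
        self.trail.append(("complete", step_id))
        if step_id in self.pending:
            self.pending.remove(step_id)

    def bookmarks(self):
        # Directly accessible return points, oldest first.
        return list(self.pending)

tracker = ProcedureTracker()
tracker.enter_step("E-0 step 4")
tracker.enter_step("E-1 step 2")   # interrupted by a plant transition
tracker.complete_step("E-1 step 2")
print(tracker.bookmarks())  # ['E-0 step 4']
```

The design point is that the operator never has to remember or re-find an interrupted step: the system keeps the pending list, so returning to incomplete work is one selection rather than a search through the frame network.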

To provide the global context within which the current procedure step occurs, the step of interest is presented in detail and is embedded in a skeletal structure of the larger response plan of which it is a part (Furnas, 1986; Woods, 1984). Context sensitivity was supported by displaying the rules for possible adaptation or shifts in the current response plan that are relevant to the current context, in parallel with current relevant options and the current region of interest in the procedures (a third window). Note how the cognitive analysis of the domain defined what types of data needed to be seen effectively in parallel, which then determined the number of windows required. Also note that the cognitive engineering redesign was, with a few exceptions, directly implementable within the base capabilities of the interface shell.
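
The detail-plus-skeleton display follows the generalized fisheye idea of Furnas (1986), in which each element's degree of interest combines its a priori importance with its distance from the current focus: DOI(x | current) = API(x) - D(x, current), and low-interest elements are collapsed out of view. The sketch below is an illustrative rendering of that formula; the response-plan data, step names, and threshold are invented for the example.

```python
# Sketch of the generalized fisheye view (Furnas, 1986) behind the
# "step in detail, embedded in a skeletal plan" display.
def degree_of_interest(api, distance):
    """DOI(x | current) = API(x) - D(x, current)."""
    return api - distance

def skeletal_view(steps, current, threshold=0):
    # steps: list of (step_id, a-priori importance, distance from current).
    # Keep the focus step plus anything whose DOI clears the threshold.
    view = []
    for step_id, api, dist in steps:
        if step_id == current or degree_of_interest(api, dist) >= threshold:
            view.append(step_id)
    return view

# Hypothetical response plan: high-level steps get high API, so they
# stay visible even when far from the current step, giving the
# skeletal context; distant low-level steps collapse.
plan = [("E-0", 5, 2), ("E-0.3", 1, 1), ("E-0.4", 1, 0),
        ("E-0.5", 1, 1), ("E-0.6", 1, 3), ("E-1", 5, 4)]
print(skeletal_view(plan, current="E-0.4"))
# -> ['E-0', 'E-0.3', 'E-0.4', 'E-0.5', 'E-1']  (E-0.6 is collapsed)
```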

As the foregoing example saliently demonstrates, there can be severe penalties for failing to adequately map the cognitive demands of the environment. However, if we understand the cognitive requirements imposed by the domain, then a variety of techniques can be employed to build support systems for those functions.

SUMMARY

The problem of providing effective decision support hinges on how the designer decides what will be useful in a particular application. Can researchers provide designers with concepts and techniques to determine what will be useful support systems, or are we condemned to simply build what can be built practically and wait for the judgment of experience? Is principle-driven design possible?

A vigorous and viable cognitive engineering can provide the knowledge and techniques necessary for principle-driven design. Cognitive engineering does this by providing the basis for a problem-driven, rather than a technology-driven, approach whereby the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver. Cognitive engineering can address (a) existing cognitive systems, in order to identify deficiencies that cognitive system redesign can correct, and (b) prospective cognitive systems, as a design tool during the allocation of cognitive tasks and the development of an effective joint architecture. In this paper we have attempted to outline the questions that need to be answered to make this promise real and to point to research that already has begun to provide the necessary concepts and techniques.

REFERENCES

Allen, J., and Perrault, C. (1980). Analyzing intention in utterances. Artificial Intelligence, 15, 143-178.

Becker, R. A., and Cleveland, W. S. (1984). Brushing the scatterplot matrix: High-interaction graphical methods for analyzing multidimensional data (Tech. Report). Murray Hill, NJ: AT&T Bell Laboratories.

Boy, G. A. (1987). Operator assistant systems. International Journal of Man-Machine Studies, 27, 541-554. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.

Bransford, J., Sherwood, R., Vye, N., and Rieser, J. (1986). Teaching and problem solving: Research foundations. American Psychologist, 41, 1078-1089.

Brown, A. L., Bransford, J. D., Ferrara, R. A., and Campione, J. C. (1983). Learning, remembering, and understanding. In J. H. Flavell and E. M. Markman (Eds.), Carmichael's manual of child psychology. New York: Wiley.

Brown, A. L., and Campione, J. C. (1986). Psychological theory and the study of learning disabilities. American Psychologist, 41, 1059-1068.

Brown, J. S., and Burton, R. R. (1978). Diagnostic models for procedural bugs in basic mathematics. Cognitive Science, 2, 155-192.

Brown, J. S., Moran, T. P., and Williams, M. D. (1982). The semantics of procedures (Tech. Report). Palo Alto, CA: Xerox Palo Alto Research Center.

Brown, J. S., and Newman, S. E. (1985). Issues in cognitive and social ergonomics: From our house to Bauhaus. Human-Computer Interaction, 1, 359-391.

Brown, J. S., and VanLehn, K. (1980). Repair theory: A generative theory of bugs in procedural skills. Cognitive Science, 4, 379-426.

Carroll, J. M., and Carrithers, C. (1984). Training wheels in a user interface. Communications of the ACM, 27, 800-806.

Cheng, P. W., and Holyoak, K. J. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.

Cheng, P., Holyoak, K., Nisbett, R., and Oliver, L. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 18, 293-328.

Clancey, W. J. (1985). Heuristic classification. Artificial Intelligence, 27, 289-350.

Cohen, P., Day, D., Delisio, J., Greenberg, M., Kjeldsen, R., and Suthers, D. (1987). Management of uncertainty in medicine. In Proceedings of the IEEE Conference on Computers and Communications. New York: IEEE.

Coombs, M. J. (1986). Artificial intelligence and cognitive technology: Foundations and perspectives. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.

Coombs, M. J., and Alty, J. L. (1984). Expert systems: An alternative paradigm. International Journal of Man-Machine Studies, 20, 21-43.

Coombs, M. J., and Hartley, R. T. (1987). The MGR algorithm and its application to the generation of explanations for novel events. International Journal of Man-Machine Studies, 27, 679-708. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.

Davis, R. (1983). Reasoning from first principles in electronic troubleshooting. International Journal of Man-Machine Studies, 19, 403-423.

De Keyser, V. (1986). Les interactions hommes-machine: Caracteristiques et utilisations des differents supports d'information par les operateurs (Person-machine interaction: How operators use different information channels). Rapport Politique Scientifique/FAST no. 8. Liege, Belgium: Psychologie du Travail, Universite de l'Etat a Liege.

Dennett, D. (1982). Beyond belief. In A. Woodfield (Ed.), Thought and object. Oxford: Clarendon Press.

Dorner, D. (1983). Heuristics and cognition in complex systems. In R. Groner, M. Groner, and W. F. Bischof (Eds.), Methods of heuristics. Hillsdale, NJ: Erlbaum.

Ehn, P., and Kyng, M. (1984). A tool perspective on design of interactive computer support for skilled workers. Unpublished manuscript, Swedish Center for Working Life, Stockholm.

Elm, W. C., and Woods, D. D. (1985). Getting lost: A case study in interface design. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 927-931). Santa Monica, CA: Human Factors Society.

Fischhoff, B. (1986). Decision making in complex systems. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support. New York: Springer-Verlag.

Fischhoff, B., Slovic, P., and Lichtenstein, S. (1979). Improving intuitive judgment by subjective sensitivity analysis. Organizational Behavior and Human Performance, 23, 339-359.

Fischhoff, B., Lanir, Z., and Johnson, S. (1986). Military risk taking and modern C3I (Tech. Report 86-2). Eugene, OR: Decision Research.

Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgments. Psychological Bulletin, 101, 75-90.

Furnas, G. W. (1986). Generalized fisheye views. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 16-23). New York: ACM/SIGCHI.

Gadd, C. S., and Pople, H. E. (1987). An interpretation-synthesis model of medical teaching rounds discourse: Implications for expert system interaction. International Journal of Educational Research, 1.

Gentner, D., and Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Erlbaum.

Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.

Gick, M. L., and Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-365.

Glaser, R. (1984). Education and thinking: The role of knowledge. American Psychologist, 39, 93-104.

Gruber, T., and Cohen, P. (1987). Design for acquisition: Principles of knowledge system design to facilitate knowledge acquisition (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 143-159.

Henderson, A., and Card, S. (1986). Rooms: The use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface (Tech. Report). Palo Alto, CA: Xerox PARC.

Hirschhorn, L. (1984). Beyond mechanization: Work and technology in a postindustrial age. Cambridge, MA: MIT Press.

Hollan, J., Hutchins, E., and Weitzman, L. (1984). Steamer: An interactive inspectable simulation-based training system. AI Magazine, 4, 15-27.

Hollnagel, E., Mancini, G., and Woods, D. D. (Eds.). (1986). Intelligent decision support in process environments. New York: Springer-Verlag.

Hollnagel, E., and Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.

Hoogovens Report. (1976). Human factors evaluation: Hoogovens No. 2 hot strip mill (Tech. Report FR251). London: British Steel Corporation/Hoogovens.

Hutchins, E., Hollan, J., and Norman, D. A. (1985). Direct manipulation interfaces. Human-Computer Interaction, 1, 311-338.

James, W. (1890). The principles of psychology. New York: Holt.

Kieras, D. E., and Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365-394.

Klein, G. A. (in press). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine research (vol. 5). Greenwich, CT: JAI Press.

Kotovsky, K., Hayes, J. R., and Simon, H. A. (1985). Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 17, 248-294.

Kruglanski, A., Friedland, N., and Farkash, E. (1984). Lay persons' sensitivity to statistical information: The case of high perceived applicability. Journal of Personality and Social Psychology, 46, 503-518.

Lesh, R. (1987). The evolution of problem representations in the presence of powerful conceptual amplifiers. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics. Hillsdale, NJ: Erlbaum.

Malone, T. W. (1983). How do people organize their desks? Implications for designing office automation systems. ACM Transactions on Office Information Systems, 1, 99-112.

Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press. (Special issue of International Journal of Man-Machine Studies, vol. 27.)

March, J. G., and Weisinger-Baylon, R. (Eds.). (1986). Ambiguity and command. Marshfield, MA: Pitman Publishing.

McKendree, J., and Carroll, J. M. (1986). Advising roles of a computer consultant. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 35-40). New York: ACM/SIGCHI.

Miller, P. L. (1983). ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 449-461.

Mitchell, C., and Forren, M. G. (1987). Multimodal user input to supervisory control systems: Voice-augmented keyboard. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 594-607.

Mitchell, C., and Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.

Miyata, Y., and Norman, D. A. (1986). Psychological issues in support of multiple activities. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.

Montmollin, M. de, and De Keyser, V. (1985). Expert logic vs. operator logic. In G. Johannsen, G. Mancini, and L. Martensson (Eds.), Analysis, design and evaluation of man-machine systems. CEC-JRC Ispra, Italy: IFAC.

Muir, B. (1987). Trust between humans and machines. International Journal of Man-Machine Studies, 27, 527-539. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.

Newell, A., and Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209-242.

Noble, D. F. (1984). Forces of production: A social history of industrial automation. New York: Alfred A. Knopf.

Norman, D. A. (1981). Steps towards a cognitive engineering (Tech. Report). San Diego: University of California, San Diego, Program in Cognitive Science.

Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the ACM, 26, 254-258.

Norman, D. A., and Draper, S. W. (1986). User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.

Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.

Perkins, D., and Martin, F. (1986). Fragile knowledge and neglected strategies in novice programmers. In E. Soloway and S. Iyengar (Eds.), Empirical studies of programmers. Norwood, NJ: Ablex.

Pew, R. W., et al. (1986). Cockpit automation technology (Tech. Report 6133). Cambridge, MA: BBN Laboratories Inc.

Pope, R. H. (1978). Power station control room and desk design: Alarm system and experience in the use of CRT displays. In Proceedings of the International Symposium on Nuclear Power Plant Control and Instrumentation. Cannes, France.

Pople, H., Jr. (1985). Evolution of an expert system: From Internist to Caduceus. In L. De Lotto and M. Stefanelli (Eds.), Artificial intelligence in medicine. New York: Elsevier Science Publishers.

Quinn, L., and Russell, D. M. (1986). Intelligent interfaces: User models and planners. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 314-320). New York: ACM/SIGCHI.

Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.

Rizzo, A., Bagnara, S., and Visciola, M. (1987). Human error detection processes. International Journal of Man-Machine Studies, 27, 555-570. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.

Robertson, G., McCracken, D., and Newell, A. (1981). The ZOG approach to man-machine communication. International Journal of Man-Machine Studies, 14, 461-488.

Rochlin, G. I., La Porte, T. R., and Roberts, K. H. (in press). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review.

Roth, E. M., Bennett, K., and Woods, D. D. (1987). Human interaction with an intelligent machine. International Journal of Man-Machine Studies, 27, 479-525. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.

Roth, E. M., and Woods, D. D. (1988). Aiding human performance I: Cognitive analysis. Le Travail Humain, 51(1), 39-64.

Schum, D. A. (1980). Current developments in research on cascaded inference. In T. S. Wallstein (Ed.), Cognitive processes in decision and choice behavior. Hillsdale, NJ: Erlbaum.

Selfridge, O. G., Rissland, E. L., and Arbib, M. A. (1984). Adaptive control of ill-defined systems. New York: Plenum Press.

Sheridan, T., and Hennessy, R. (Eds.). (1984). Research and modeling of supervisory control behavior. Washington, DC: National Academy Press.

Shneiderman, B. (1986). Seven plus or minus two central issues in human-computer interaction. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 343-349). New York: ACM/SIGCHI.

Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S., and Suchman, L. (1985, September). Beyond the chalkboard: Using computers to support collaboration and problem solving in meetings (Tech. Report). Palo Alto, CA: Intelligent Systems Laboratory, Xerox Palo Alto Research Center.

Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.

Wiecha, C., and Henrion, M. (1987). A graphical tool for structuring and understanding quantitative decision models. In Proceedings of Workshop on Visual Languages. New York: IEEE Computer Society.

Wiener, E. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229-244.

Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.

Woods, D. D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L. P. Goodstein, H. B. Andersen, and S. E. Olsen (Eds.), Mental models, tasks and errors: A collection of essays to celebrate Jens Rasmussen's 60th birthday. London: Taylor and Francis.

Woods, D. D., and Hollnagel, E. (1987). Mapping cognitive demands in complex problem-solving worlds (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 257-275.

Woods, D. D., and Roth, E. M. (1986). Models of cognitive behavior in nuclear power plant personnel (NUREG-CR-4532). Washington, DC: U.S. Nuclear Regulatory Commission.

Woods, D. D., and Roth, E. M. (1988a). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interaction. New York: North-Holland.

Woods, D. D., and Roth, E. M. (1988b). Aiding human performance II: From cognitive analysis to support systems. Le Travail Humain, 51, 139-172.

Woods, D. D., Roth, E. M., and Pople, H. (1987). Cognitive Environment Simulation: An artificial intelligence system for human performance assessment (NUREG-CR-4862). Washington, DC: U.S. Nuclear Regulatory Commission.



Elm W C and Woods D D (1985) Getting lost A casestudy in interface design In Proceedings of the HumanFactors Society 29th Annual Meeting (pp 927-931)Santa Monica CA Human Factors Society

Fischhoff B (1986) Decision making in complex systemsIn E Hollnagel G Mancini and D D Woods (Eds)Intelligent decision support New York Springer-Ver-lag

Fischhoff B Slovic P and Lichtenstein S (1979) Im-proving intuitive judgment by subjective sensitivityanalysis Organizational Behavior and Human Perfor-mance 23 339-359

Fischhoff B Lanir Z and Johnson S (1986) Militaryrisk taking and modem C31(Tech Report 86-2) EugeneOR Decision Research

Funder D C (1987) Errors and mistakes Evaluating theaccuracy of social judgments Psychological Bulletin10175-90

Furnas G W (1986) Generalized fisheye views In M

HUMAN FACTORS

Mantei and P Orbeton (Eds) Human factors in com-puting systems CHJ86 Conference Proceedings (pp16-23) New York ACMSIGCHI

Gadd C S and Pople H E (1987) An interpretation syn-thesis model of medical teaching rounds discourseImplications for expert system interaction Interna-tional Journal of Educational Research 1

Gentner D and Stevens A 1 (Eds) (1983) Mentalmodels Hillsdale NJ Erlbaum

Gibson J J (1979) The ecological approach to visual per-ception Boston Houghton Mifflin

Gick M 1 and Holyoak K J (1980) Analogical problemsolving Cognitive Psychology 12 306-365

Glaser R (1984) Education and thinking The role ofknowledge American Psychologist 39 93-104

Gruber T and Cohen P (1987) Design for acquisitionPrinciples of knowledge system design to facilitateknowledge acquisition (special issue on knowledge ac-quisition for knowledge-based systems) InternationalJournal of Man-Machine Studies 26 143-159

Henderson A and Card S (1986) Rooms The use of mul-tiple virtual workspaces to reduce space contention in awindow-based graphical user interface (Tech Report)Palo Alto Xerox PARCo

Hirschhorn 1 (1984) Beyond mechanization Work andtechnology in a postindustrial age Cambridge MA MITPress

Hollan J Hutchins E and Weitzman 1 (1984)Steamer An interactive inspectable simulation-basedtraining system AI Magazine 4 15-27

Hollnagel E Mancini G and Woods D D (Eds) (1986)Intelligent decision support in process environmentsNew York Springer-Verlag

Hollnagel E and Woods D D (1983) Cognitive systemsengineering New wine in new bottles InternationalJournal of Man-Machine Studies 18 583-600

Hoogovens Report (1976) Human factors evaluation Hoo-govens No2 hot strip mill (Tech Report FR251) Lon-don British Steel CorporationHoogovens

Hutchins E Hollan J and Norman D A (1985) Directmanipulation interfaces Human-Computer Interac-tion 1311-338

James W (1890) The principles of psychology New YorkHolt

Kieras D E and Polson P G (1985) An approach to theformal analysis of user complexity International Jour-nal of Man-Machine Studies 22 365-394

Klein G A (in press) Recognition-primed decisions InW B Rouse (td) Advances in man-machine researchvol 5 Greenwich CT JAI Press

Kotovsky K Hayes J R and Simon H A (1985) Whyare some problems hard Evidence from Tower ofHanoi Cognitive Psychology 17 248-294

Kruglanski A Friedland N and Farkash E (1984) Laypersons sensitivity to statistical information The caseof high perceived applicability Journal of Personalityand Social Psychology 46 503-518

Lesh R (1987) The evolution of problem representationsin the presence of powerful conceptual amplifiers InC Janvier (Ed) Problems of representation in the teach-ing and learning of mathematics Hillsdale NJ Erl-baum

Malone T W (1983) How do people organize their desksImplications for designing office automation systemsACM Transactions on Office Information Systems 199-112

COGNITIVE ENGINEERING

Mancini G Woods D D and Hollnagel E (Eds) (inpress) Cognitive engineering in dynamic worlds Lon-don Academic Press (Special issue of InternationalJournal of Man-Machine Studies vol 27)

March J G and Weisinger-Baylon R (Eds) (1986) Am-biguity and command Marshfield MA Pitman Pub-lishing

McKendree 1 and Carroll J M (1986) Advising roles ofa computer consultant In M Mantei and P Oberton(Eds) Human factors in computing systems CHI86Conference Proceedings (pp 35-40) New York ACMSIGCHI

Miller P L (1983) ATTENDING Critiquing a physiciansmanagement plan IEEE Transactions on Pattern Anal-ysis and Machine Intelligence PAMI-5 449-461

Mitchell C and Forren M G (1987) Multimodal userinput to supervisory control systems Voice-aug-mented keyboard IEEE Transactions on SystemsMan and Cybernetics SMC-I7 594-607

Mitchell C and Saisi D (1987) Use of model-based qual-itative icons and adaptive windows in workstations forsupervisory control systems IEEE Transactions onSystems Man and Cybernetics SMC-I7 573-593

Miyata Y and Norman D A (1986) Psychological issuesin support of multiple activities In D A Norman andS W Draper (Eds) User-centered system design Newperspectives on human-computer interaction HillsdaleNJ Erlbaum

Montmollin M de and De Keyser V (1985) Expert logicvs operator logic In G Johannsen G Mancini and LMartensson (Eds) Analysis design and evaluation ofman-machine systems CEC-JRC Ispra Italy IFAC

Muir B (1987) Trust between humans and machinesln-ternational Journal of Man-Machine Studies 27527-539 Also in Mancini G Woods D and Hollna-gel E (Eds) (in press) Cognitive engineering in dy-namic worlds London Academic Press

Newell A and Card S K (1985) The prospects for psy-chological science in human-computer interactionHuman-Computer Interaction I 209-242

Noble D F (1984) Forces of production A social history ofindustrial automation New York Alfred A Knopf

Norman D A (1981) Steps towards a cognitive engineering(Tech Report) San Diego University of CaliforniaSan Diego Program in Cognitive Science

Norman D A (1983) Design rules based on analyses ofhuman error Communications of the ACM 26254-258

Norman D A and Draper S W (1986) User-centeredsystem design New perspectives on human-computer in-teraction Hillsdale NJ Erlbaum

Pea R D (1985) Beyond amplification Using the com-puter to reorganize mental functioning Educationalpsychologist 20 167-182

Perkins D and Martin F (1986) Fragile knowledge andneglected strategies in novice programmers In E So-loway and S Iyengar (Eds) Empirical studies of pro-grammers Norwood NJ Ablex

Pew R W et at (1986) Cockpit automation technology(Tech Report 6133) Cambridge MA BBN Laborato-ries Inc

Pope R H (1978) Power station control room and deskdesign Alarm system and experience in the use of CRTdisplays In Proceedings of the International Sympo-sium on Nuclear Power Plant Control and Instrumenta-tion Cannes France

August 1988-429

Pople H Jr (1985) Evolution of an expert system Frominternist to caduceus In L De Lotto and M Stefanelli(Eds) Artificial intelligence in medicine New York El-sevier Science Publishers

Quinn L and Russell D M (1986) Intelligent interfacesUser models and planners In M Mantei and P Ober-ton (Eds) Human factors in computing systemsCHI86 Conference Proceedings (pp 314-320) NewYork ACMSIGCHI

Rasmussen J (1986) Information processing and human-machine interaction An approach to cognitive engineer-ing New York North-Holland

Rizzo A Bagnara S and Visciola M (1987) Humanerror detection processes International Journal ofMan-Machine Studies 27 555-570 Also in ManciniG Woods D and Hollnagel E (Eds) (in press) Cog-nitive engineering in dynamic worlds London Aca-demic Press

Robertson G McCracken D and Newell A (1981) TheZOG approach to man-machine communication Inter-nationaJ ournal of M an-Machine Studies 14 461-488

Rochlin G I La Porte T R and Roberts K H (inpress) The self-designing high-reliability organiza-tion Aircraft carrier flight operations at sea NavalWar College Review

Roth E M Bennett K and Woods D D (1987) Humaninteraction with an intelligent machine Interna-tional Journal of Man-Machine Studies 27 479-525Also in Mancini G Woods D and Hollnagel E(Eds) (in press) Cognitive engineering in dynamicworlds London Academic Press

Roth E M and Woods D D (1988) Aiding human per-formance L Cognitive analysis Le Travail Humain51(1)39-64

Schum D A (1980) Current developments in research oncascaded inference In T S Wallstein (Ed) Cognitiveprocesses in decision and choice behavior HillsdaleNJ Erlbaurn

Selfridge O G Rissland E L and Arbib M A (1984)Adaptive control of ill-defined systems New York Ple-num Press

Sheridan T and Hennessy R (Eds) (1984) Research andmodeling of supervisory control behavior WashingtonDC National Academy Press

Shneiderman B (1986) Seven plus or minus two centralissues in human-computer interaction In M Manteiand P Obreton (Eds) Human factors in computingsystems CHI86 Conference Proceedings (pp 343-349)New York ACMSIGCHL

Stefik M Foster G Bobrow D Kahn K Lanning Sand Suchman L (1985 September) Beyond the chalk-board Using computers to support collaboration andproblem solving in meetings (Tech Report) Palo AltoCA Intelligent Systems Laboratory Xerox Palo AltoResearch Center

Suchman L A (1987) Plans and situated actions Theproblem of human-machine communication Cam-bridge Cambridge University Press

Wiecha C and Henrion M (1987) A graphical tool forstructuring and understanding quantitative decisionmodels In Proceedings of Workshop on Visual Lan-guages New York IEEE Computer Society

Wiener E (1985) Beyond the sterile cockpit Human Fac-tors 27 75-90

Woods D D (1984) Visual momentum A concept to im-prove the cognitive coupling of person and computer

430 -August 1988

International Journal of Man-Machine Studies 21229-244

Woods D D (1986) Paradigms for intelligent decisionsupport In E Hollnagel G Mancini and D D Woods(Eds) Intelligent decision support in process environ-ments New York Springer-Verlag

Woods D D (1988) Coping with complexity The psy-chology of human behavior in complex systems InL P Goodstein H B Andersen and S E Olsen (Eds)Mental models tasks and errors A collection of essays tocelebrate Jens Rasmussens 60th birthday London Tay-lor and Francis

Woods D D and Hollnagel E (1987) Mapping cognitivedemands in complex problem solving worlds (specialissue on knowledge acquisition for knowledge-basedsystems) International Journal of Man-Machine Stud-ies 26 257-275

HUMAN FACTORS

Woods D D and Roth E M (1986) Models of cognitivebehavior in nuclear power plant personnel (NUREG-CR-4532) Washington DC US Nuclear RegulatoryCommission

Woods D D and Roth E M (l988a) Cognitive systemsengineering In M Helander (Ed) Handbook ofhuman-computer interaction New York North-Hol-land

Woods D D and Roth E M (l988b) Aiding human per-formance II From cognitive analysis to support sys-tems Le Travail Humain 51 139-172

Woods D D Roth E M and Pople H (1987) CognitiveEnvironment Simulation An artificial intelligence sys-tem for human performance assessment (NUREG-CR-4862) Washington DC US Nuclear Regulatory Com-mission


entering domain knowledge (i.e., the current instructions as implemented for the paper medium) into the interface and network data base framework provided by the shell.

The system contained two kinds of frames: menu frames, which served to point to other frames, and content frames, which contained instructions from the paper procedures (generally one procedure step per frame). In preliminary tests of the system, it was found that people uniformly failed to complete recovery tasks with procedures computerized in this way. They became disoriented or "lost": unable to keep procedure steps in pace with plant behavior, unable to determine where they were in the network of frames, unable to decide where to go next, or unable even to find places where they knew they should be (i.e., they diagnosed the situation and knew the appropriate responses as trained operators, yet could not find the relevant procedural steps in the network). These results were found with people experienced with the paper-based procedures and plant operations, as well as with people knowledgeable in the frame-network software package and how the procedures were implemented within it.
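The navigation cost built into such a frame network can be illustrated with a toy reconstruction (a sketch only: the frame names, network shape, and `hops` function are invented for illustration, not taken from the original shell). Moving between content frames in different branches requires climbing through several menu frames, with no view of the surrounding structure along the way.

```python
# Hypothetical sketch of a menu-frame / content-frame network.
# Menu frames only point to other frames; content frames each hold
# one procedure step. All names here are invented.
from collections import deque

frames = {
    "top":       {"kind": "menu",    "links": ["recovery", "diagnosis"]},
    "recovery":  {"kind": "menu",    "links": ["step1", "step2"]},
    "diagnosis": {"kind": "menu",    "links": ["step3"]},
    "step1":     {"kind": "content", "links": []},
    "step2":     {"kind": "content", "links": []},
    "step3":     {"kind": "content", "links": []},
}

def hops(start, goal):
    """Minimum frame transitions from start to goal, allowing forward
    (menu link) moves and back-navigation to a parent frame."""
    adj = {f: set(d["links"]) for f, d in frames.items()}
    for f, d in frames.items():
        for child in d["links"]:
            adj[child].add(f)  # back-navigation edge
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# A transition between steps in different branches costs four frame
# changes, each showing only a narrow window on the world:
print(hops("step1", "step3"))  # 4
```

Every one of those intermediate frames replaces the operator's entire view, which is why keeping track of plant state and procedure position at the same time became so difficult.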

What was the source of the disorientation problem? It resulted from a failure to analyze the cognitive demands associated with using procedures in an externally paced world. For example, in using the procedures the operator often is required to interrupt one activity and transition to another step in the procedure, or to a different procedure, depending on plant conditions and plant responses to operator actions. As a result, operators need to be able to rapidly transition across procedure boundaries and to return to incomplete steps. Because of the size of a frame, there was a very high proportion of menu frames relative to content frames, and the content frames provided a narrow window on the world. This structure made it difficult to read ahead to anticipate instructions, to mark steps pending completion and return to them easily, to see the organization of the steps, or to mark a trail of activities carried out during the recovery. Many activities that are inherently easy to perform in a physical book turned out to be very difficult to carry out (for example, reading ahead). The result was a mismatch between user information-processing activities during domain tasks and the structure of the interface as a representation of the world of recovery from abnormalities.

These results triggered a full design cycle that began with a cognitive analysis to determine the user information-handling activities needed to effectively accomplish recovery tasks in emergency situations. Following procedures was not simply a matter of linear, step-by-step execution of instructions; rather, it required the ability to maintain a broad context of the purpose and relationships among the elements in the procedure (see also Brown et al., 1982; Roth et al., 1987). Operators needed to maintain awareness of the global context (i.e., how a given step fits into the overall plan), to anticipate the need for actions by looking ahead, and to monitor for changes in plant state that would require adaptation of the current response plan.

A variety of cognitive engineering techniques were utilized in a new interface design to support these demands (see Woods, 1984). First, a spatial metaphor was used to make the system more like a physical book. Second, display selection/movement options were presented in parallel, rather than sequentially, with procedural information (defining two types of windows in the interface). Transition options were presented at several grains of analysis to support moves from step to step as easily as moves across larger units in the structure of the procedure system. In addition, incomplete steps were automatically tracked, and those steps were made directly accessible (e.g., electronic bookmarks or placeholders).
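The electronic-bookmark idea can be sketched in a few lines (a hedged illustration: the class and method names are invented, not the original implementation). The key behavior is that interrupting a step records it automatically, so returning to it never requires retracing the menu structure.

```python
# Hypothetical sketch of automatic tracking of incomplete steps
# ("electronic bookmarks"). Names and structure are invented for
# illustration; the original system's internals are not documented here.

class ProcedureSession:
    def __init__(self):
        self.pending = []    # interrupted steps, oldest first
        self.current = None  # step the operator is working on now

    def open_step(self, step):
        # Leaving an unfinished step bookmarks it automatically.
        if self.current is not None:
            self.pending.append(self.current)
        self.current = step

    def complete_step(self):
        # Finishing the current step resumes the most recent bookmark.
        self.current = self.pending.pop() if self.pending else None

session = ProcedureSession()
session.open_step("step 4")    # begin a procedure step
session.open_step("step 12")   # plant conditions force a transition
print(session.pending)         # ['step 4']  -- directly accessible
session.complete_step()
print(session.current)         # step 4 -- back where work was pending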

To provide the global context within which the current procedure step occurs, the step of interest is presented in detail and is embedded in a skeletal structure of the larger response plan of which it is a part (Furnas, 1986; Woods, 1984). Context sensitivity was supported by displaying the rules for possible adaptation or shifts in the current response plan that are relevant to the current context, in parallel with current relevant options and the current region of interest in the procedures (a third window). Note how the cognitive analysis of the domain defined what types of data needed to be seen effectively in parallel, which then determined the number of windows required. Also note that the cognitive engineering redesign was, with a few exceptions, directly implementable within the base capabilities of the interface shell.
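The skeletal-structure display follows the general form of Furnas's (1986) fisheye view: an element is shown when its a priori importance minus its distance from the current focus exceeds a threshold, so the focal step appears in detail alongside only the high-level plan context. The sketch below uses invented step names, importance values, and threshold purely to illustrate that selection rule.

```python
# Minimal sketch of the generalized-fisheye selection rule (Furnas, 1986):
# DOI(x) = a-priori importance of x minus distance of x from the focus.
# All numbers and names below are illustrative assumptions.

def degree_of_interest(api, distance):
    return api - distance

# A toy response plan: (element, a-priori importance, distance from focus).
# "step 4" is the focus; "goal" is a high-level plan element.
plan = [("goal", 5, 2), ("substep 3", 1, 1), ("step 4", 3, 0),
        ("substep 5", 1, 1), ("step 6", 3, 2)]

threshold = 2
shown = [name for name, api, d in plan
         if degree_of_interest(api, d) >= threshold]
print(shown)  # ['goal', 'step 4']
```

Low-importance neighbors drop out while the focal step and the skeletal high-level plan survive, which is exactly the "detail embedded in context" effect the redesign needed.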

As the foregoing example saliently demonstrates, there can be severe penalties for failing to adequately map the cognitive demands of the environment. However, if we understand the cognitive requirements imposed by the domain, then a variety of techniques can be employed to build support systems for those functions.

SUMMARY

The problem of providing effective decision support hinges on how the designer decides what will be useful in a particular application. Can researchers provide designers with concepts and techniques to determine what will be useful support systems, or are we condemned to simply build what can be built practically and wait for the judgment of experience? Is principle-driven design possible?

A vigorous and viable cognitive engineering can provide the knowledge and techniques necessary for principle-driven design. Cognitive engineering does this by providing the basis for a problem-driven, rather than a technology-driven, approach whereby the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver. Cognitive engineering can address (a) existing cognitive systems, in order to identify deficiencies that cognitive system redesign can correct, and (b) prospective cognitive systems, as a design tool during the allocation of cognitive tasks and the development of an effective joint architecture. In this paper we have attempted to outline the questions that need to be answered to make this promise real and to point to research that already has begun to provide the necessary concepts and techniques.

REFERENCES

Allen, J., and Perrault, C. (1980). Analyzing intention in utterances. Artificial Intelligence, 15, 143-178.

Becker, R. A., and Cleveland, W. S. (1984). Brushing the scatterplot matrix: High-interaction graphical methods for analyzing multidimensional data (Tech. Report). Murray Hill, NJ: AT&T Bell Laboratories.

Boy, G. A. (1987). Operator assistant systems. International Journal of Man-Machine Studies, 27, 541-554. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.

Bransford, J., Sherwood, R., Vye, N., and Rieser, J. (1986). Teaching and problem solving: Research foundations. American Psychologist, 41, 1078-1089.

Brown, A. L., Bransford, J. D., Ferrara, R. A., and Campione, J. C. (1983). Learning, remembering, and understanding. In J. H. Flavell and E. M. Markman (Eds.), Carmichael's manual of child psychology. New York: Wiley.

Brown, A. L., and Campione, J. C. (1986). Psychological theory and the study of learning disabilities. American Psychologist, 41, 1059-1068.

Brown, J. S., and Burton, R. R. (1978). Diagnostic models for procedural bugs in basic mathematics. Cognitive Science, 2, 155-192.

Brown, J. S., Moran, T. P., and Williams, M. D. (1982). The semantics of procedures (Tech. Report). Palo Alto, CA: Xerox Palo Alto Research Center.

Brown, J. S., and Newman, S. E. (1985). Issues in cognitive and social ergonomics: From our house to Bauhaus. Human-Computer Interaction, 1, 359-391.

Brown, J. S., and VanLehn, K. (1980). Repair theory: A generative theory of bugs in procedural skills. Cognitive Science, 4, 379-426.


Carroll, J. M., and Carrithers, C. (1984). Training wheels in a user interface. Communications of the ACM, 27, 800-806.

Cheng, P. W., and Holyoak, K. J. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.

Cheng, P., Holyoak, K., Nisbett, R., and Oliver, L. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 18, 293-328.

Clancey, W. J. (1985). Heuristic classification. Artificial Intelligence, 27, 289-350.

Cohen, P., Day, D., Delisio, J., Greenberg, M., Kjeldsen, R., and Suthers, D. (1987). Management of uncertainty in medicine. In Proceedings of the IEEE Conference on Computers and Communications. New York: IEEE.

Coombs, M. J. (1986). Artificial intelligence and cognitive technology: Foundations and perspectives. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.

Coombs, M. J., and Alty, J. L. (1984). Expert systems: An alternative paradigm. International Journal of Man-Machine Studies, 20, 21-43.

Coombs, M. J., and Hartley, R. T. (1987). The MGR algorithm and its application to the generation of explanations for novel events. International Journal of Man-Machine Studies, 27, 679-708. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.

Davis, R. (1983). Reasoning from first principles in electronic troubleshooting. International Journal of Man-Machine Studies, 19, 403-423.

De Keyser, V. (1986). Les interactions hommes-machine: Caractéristiques et utilisations des différents supports d'information par les opérateurs (Person-machine interaction: How operators use different information channels). Rapport Politique Scientifique/FAST no. 8. Liège, Belgium: Psychologie du Travail, Université de l'État à Liège.

Dennett, D. (1982). Beyond belief. In A. Woodfield (Ed.), Thought and object. Oxford: Clarendon Press.

Dorner, D. (1983). Heuristics and cognition in complex systems. In R. Groner, M. Groner, and W. F. Bischof (Eds.), Methods of heuristics. Hillsdale, NJ: Erlbaum.

Ehn, P., and Kyng, M. (1984). A tool perspective on design of interactive computer support for skilled workers. Unpublished manuscript, Swedish Center for Working Life, Stockholm.

Elm, W. C., and Woods, D. D. (1985). Getting lost: A case study in interface design. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 927-931). Santa Monica, CA: Human Factors Society.

Fischhoff, B. (1986). Decision making in complex systems. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support. New York: Springer-Verlag.

Fischhoff, B., Slovic, P., and Lichtenstein, S. (1979). Improving intuitive judgment by subjective sensitivity analysis. Organizational Behavior and Human Performance, 23, 339-359.

Fischhoff, B., Lanir, Z., and Johnson, S. (1986). Military risk taking and modern C3I (Tech. Report 86-2). Eugene, OR: Decision Research.

Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgments. Psychological Bulletin, 101, 75-90.

Furnas, G. W. (1986). Generalized fisheye views. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 16-23). New York: ACM/SIGCHI.

Gadd, C. S., and Pople, H. E. (1987). An interpretation synthesis model of medical teaching rounds discourse: Implications for expert system interaction. International Journal of Educational Research, 1.

Gentner, D., and Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Erlbaum.

Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.

Gick, M. L., and Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-365.

Glaser, R. (1984). Education and thinking: The role of knowledge. American Psychologist, 39, 93-104.

Gruber, T., and Cohen, P. (1987). Design for acquisition: Principles of knowledge system design to facilitate knowledge acquisition (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 143-159.

Henderson, A., and Card, S. (1986). Rooms: The use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface (Tech. Report). Palo Alto, CA: Xerox PARC.

Hirschhorn, L. (1984). Beyond mechanization: Work and technology in a postindustrial age. Cambridge, MA: MIT Press.

Hollan, J., Hutchins, E., and Weitzman, L. (1984). Steamer: An interactive inspectable simulation-based training system. AI Magazine, 4, 15-27.

Hollnagel, E., Mancini, G., and Woods, D. D. (Eds.). (1986). Intelligent decision support in process environments. New York: Springer-Verlag.

Hollnagel, E., and Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.

Hoogovens Report (1976). Human factors evaluation: Hoogovens No. 2 hot strip mill (Tech. Report FR251). London: British Steel Corporation/Hoogovens.

Hutchins, E., Hollan, J., and Norman, D. A. (1985). Direct manipulation interfaces. Human-Computer Interaction, 1, 311-338.

James, W. (1890). The principles of psychology. New York: Holt.

Kieras, D. E., and Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365-394.

Klein, G. A. (in press). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine research, vol. 5. Greenwich, CT: JAI Press.

Kotovsky, K., Hayes, J. R., and Simon, H. A. (1985). Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 17, 248-294.

Kruglanski, A., Friedland, N., and Farkash, E. (1984). Lay persons' sensitivity to statistical information: The case of high perceived applicability. Journal of Personality and Social Psychology, 46, 503-518.

Lesh, R. (1987). The evolution of problem representations in the presence of powerful conceptual amplifiers. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics. Hillsdale, NJ: Erlbaum.

Malone, T. W. (1983). How do people organize their desks? Implications for designing office automation systems. ACM Transactions on Office Information Systems, 1, 99-112.


Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press. (Special issue of International Journal of Man-Machine Studies, vol. 27.)

March, J. G., and Weisinger-Baylon, R. (Eds.). (1986). Ambiguity and command. Marshfield, MA: Pitman Publishing.

McKendree, J., and Carroll, J. M. (1986). Advising roles of a computer consultant. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 35-40). New York: ACM/SIGCHI.

Miller, P. L. (1983). ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 449-461.

Mitchell, C., and Forren, M. G. (1987). Multimodal user input to supervisory control systems: Voice-augmented keyboard. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 594-607.

Mitchell, C., and Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.

Miyata, Y., and Norman, D. A. (1986). Psychological issues in support of multiple activities. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.

Montmollin, M. de, and De Keyser, V. (1985). Expert logic vs. operator logic. In G. Johannsen, G. Mancini, and L. Martensson (Eds.), Analysis, design and evaluation of man-machine systems. CEC-JRC Ispra, Italy: IFAC.

Muir, B. (1987). Trust between humans and machines. International Journal of Man-Machine Studies, 27, 527-539. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.

Newell, A., and Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209-242.

Noble, D. F. (1984). Forces of production: A social history of industrial automation. New York: Alfred A. Knopf.

Norman, D. A. (1981). Steps towards a cognitive engineering (Tech. Report). San Diego: University of California, San Diego, Program in Cognitive Science.

Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the ACM, 26, 254-258.

Norman, D. A., and Draper, S. W. (1986). User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.

Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.

Perkins, D., and Martin, F. (1986). Fragile knowledge and neglected strategies in novice programmers. In E. Soloway and S. Iyengar (Eds.), Empirical studies of programmers. Norwood, NJ: Ablex.

Pew, R. W., et al. (1986). Cockpit automation technology (Tech. Report 6133). Cambridge, MA: BBN Laboratories, Inc.

Pope, R. H. (1978). Power station control room and desk design: Alarm system and experience in the use of CRT displays. In Proceedings of the International Symposium on Nuclear Power Plant Control and Instrumentation. Cannes, France.


Pople, H., Jr. (1985). Evolution of an expert system: From Internist to Caduceus. In L. De Lotto and M. Stefanelli (Eds.), Artificial intelligence in medicine. New York: Elsevier Science Publishers.

Quinn, L., and Russell, D. M. (1986). Intelligent interfaces: User models and planners. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 314-320). New York: ACM/SIGCHI.

Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.

Rizzo, A., Bagnara, S., and Visciola, M. (1987). Human error detection processes. International Journal of Man-Machine Studies, 27, 555-570. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.

Robertson, G., McCracken, D., and Newell, A. (1981). The ZOG approach to man-machine communication. International Journal of Man-Machine Studies, 14, 461-488.

Rochlin, G. I., La Porte, T. R., and Roberts, K. H. (in press). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review.

Roth, E. M., Bennett, K., and Woods, D. D. (1987). Human interaction with an intelligent machine. International Journal of Man-Machine Studies, 27, 479-525. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.

Roth, E. M., and Woods, D. D. (1988). Aiding human performance I: Cognitive analysis. Le Travail Humain, 51(1), 39-64.

Schum, D. A. (1980). Current developments in research on cascaded inference. In T. S. Wallstein (Ed.), Cognitive processes in decision and choice behavior. Hillsdale, NJ: Erlbaum.

Selfridge, O. G., Rissland, E. L., and Arbib, M. A. (1984). Adaptive control of ill-defined systems. New York: Plenum Press.

Sheridan, T., and Hennessy, R. (Eds.). (1984). Research and modeling of supervisory control behavior. Washington, DC: National Academy Press.

Shneiderman, B. (1986). Seven plus or minus two central issues in human-computer interaction. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 343-349). New York: ACM/SIGCHI.

Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S., and Suchman, L. (1985, September). Beyond the chalkboard: Using computers to support collaboration and problem solving in meetings (Tech. Report). Palo Alto, CA: Intelligent Systems Laboratory, Xerox Palo Alto Research Center.

Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.

Wiecha, C., and Henrion, M. (1987). A graphical tool for structuring and understanding quantitative decision models. In Proceedings of the Workshop on Visual Languages. New York: IEEE Computer Society.

Wiener, E. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229-244.

Woods D D (1986) Paradigms for intelligent decisionsupport In E Hollnagel G Mancini and D D Woods(Eds) Intelligent decision support in process environ-ments New York Springer-Verlag

Woods D D (1988) Coping with complexity The psy-chology of human behavior in complex systems InL P Goodstein H B Andersen and S E Olsen (Eds)Mental models tasks and errors A collection of essays tocelebrate Jens Rasmussens 60th birthday London Tay-lor and Francis

Woods D D and Hollnagel E (1987) Mapping cognitivedemands in complex problem solving worlds (specialissue on knowledge acquisition for knowledge-basedsystems) International Journal of Man-Machine Stud-ies 26 257-275

HUMAN FACTORS

Woods D D and Roth E M (1986) Models of cognitivebehavior in nuclear power plant personnel (NUREG-CR-4532) Washington DC US Nuclear RegulatoryCommission

Woods D D and Roth E M (l988a) Cognitive systemsengineering In M Helander (Ed) Handbook ofhuman-computer interaction New York North-Hol-land

Woods D D and Roth E M (l988b) Aiding human per-formance II From cognitive analysis to support sys-tems Le Travail Humain 51 139-172

Woods D D Roth E M and Pople H (1987) CognitiveEnvironment Simulation An artificial intelligence sys-tem for human performance assessment (NUREG-CR-4862) Washington DC US Nuclear Regulatory Com-mission


cally tracked, and those steps were made directly accessible (e.g., electronic bookmarks or placeholders).
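The paper does not give an implementation of this bookmark mechanism; the following is a minimal Python sketch of how completed steps might be tracked and made directly accessible. All names here (`ProcedureTracker`, `next_pending`, and so on) are illustrative assumptions, not the actual system.

```python
from dataclasses import dataclass, field

@dataclass
class ProcedureTracker:
    """Hypothetical tracker: records which response-plan steps are complete
    and supports electronic bookmarks/placeholders for direct access."""
    steps: list
    completed: set = field(default_factory=set)
    bookmarks: dict = field(default_factory=dict)

    def complete(self, step):
        # Progress is recorded automatically as the operator executes steps.
        self.completed.add(step)

    def bookmark(self, name, step):
        # A named placeholder the operator can jump back to without searching.
        self.bookmarks[name] = step

    def resume(self, name):
        return self.bookmarks[name]

    def next_pending(self):
        # First step in plan order not yet marked complete (None if all done).
        return next((s for s in self.steps if s not in self.completed), None)
```

With a three-step plan, completing the first step makes `next_pending()` return the second, and a bookmark set on any step can be resumed directly.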

To provide the global context within which the current procedure step occurs, the step of interest is presented in detail and is embedded in a skeletal structure of the larger response plan of which it is a part (Furnas, 1986; Woods, 1984). Context sensitivity was supported by displaying the rules for possible adaptation or shifts in the current response plan that are relevant to the current context, in parallel with the current relevant options and the current region of interest in the procedures (a third window). Note how the cognitive analysis of the domain defined what types of data needed to be seen effectively in parallel, which then determined the number of windows required. Also note that the cognitive engineering redesign was, with a few exceptions, directly implementable within the base capabilities of the interface shell.
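As a sketch of the display logic just described (the step of interest shown in detail, embedded in a skeleton of the larger plan, with a parallel window of context-relevant adaptation rules), the functions below assume hypothetical data structures not taken from the paper:

```python
def skeletal_view(plan, focus):
    """Render the focused step in full detail, embedded in a one-line
    skeleton of the larger response plan (a fisheye-style view)."""
    out = []
    for i, step in enumerate(plan):
        if i == focus:
            out.append("-> " + step["title"])
            # Full detail is shown only for the step of interest.
            out.extend("     " + d for d in step["details"])
        else:
            # Other steps appear only as skeletal context.
            out.append("   " + step["title"])
    return out

def applicable_rules(rules, context):
    """Third window: adaptation/shift rules whose preconditions
    match the current context."""
    return [r["action"] for r in rules if r["when"](context)]
```

A window manager would present the `skeletal_view` output alongside the applicable-rules list so that both are visible in parallel, mirroring how the cognitive analysis determined the number of windows required.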

As the foregoing example saliently demonstrates, there can be severe penalties for failing to adequately map the cognitive demands of the environment. However, if we understand the cognitive requirements imposed by the domain, then a variety of techniques can be employed to build support systems for those functions.

SUMMARY

The problem of providing effective decision support hinges on how the designer decides what will be useful in a particular application. Can researchers provide designers with concepts and techniques to determine what will be useful support systems, or are we condemned simply to build what can be built practically and wait for the judgment of experience? Is principle-driven design possible?

A vigorous and viable cognitive engineering can provide the knowledge and techniques necessary for principle-driven design. Cognitive engineering does this by providing the basis for a problem-driven, rather than a technology-driven, approach whereby the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver. Cognitive engineering can address (a) existing cognitive systems, in order to identify deficiencies that cognitive system redesign can correct, and (b) prospective cognitive systems, as a design tool during the allocation of cognitive tasks and the development of an effective joint architecture. In this paper we have attempted to outline the questions that need to be answered to make this promise real and to point to research that already has begun to provide the necessary concepts and techniques.

REFERENCES

Allen, J., and Perrault, C. (1980). Analyzing intention in utterances. Artificial Intelligence, 15, 143-178.

Becker, R. A., and Cleveland, W. S. (1984). Brushing the scatterplot matrix: High-interaction graphical methods for analyzing multidimensional data (Tech. Report). Murray Hill, NJ: AT&T Bell Laboratories.

Boy, G. A. (1987). Operator assistant systems. International Journal of Man-Machine Studies, 27, 541-554. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.

Bransford, J., Sherwood, R., Vye, N., and Rieser, J. (1986). Teaching and problem solving: Research foundations. American Psychologist, 41, 1078-1089.

Brown, A. L., Bransford, J. D., Ferrara, R. A., and Campione, J. C. (1983). Learning, remembering, and understanding. In J. H. Flavell and E. M. Markman (Eds.), Carmichael's manual of child psychology. New York: Wiley.

Brown, A. L., and Campione, J. C. (1986). Psychological theory and the study of learning disabilities. American Psychologist, 41, 1059-1068.

Brown, J. S., and Burton, R. R. (1978). Diagnostic models for procedural bugs in basic mathematics. Cognitive Science, 2, 155-192.

Brown, J. S., Moran, T. P., and Williams, M. D. (1982). The semantics of procedures (Tech. Report). Palo Alto, CA: Xerox Palo Alto Research Center.

Brown, J. S., and Newman, S. E. (1985). Issues in cognitive and social ergonomics: From our house to Bauhaus. Human-Computer Interaction, 1, 359-391.

Brown, J. S., and VanLehn, K. (1980). Repair theory: A generative theory of bugs in procedural skills. Cognitive Science, 4, 379-426.


Carroll, J. M., and Carrithers, C. (1984). Training wheels in a user interface. Communications of the ACM, 27, 800-806.

Cheng, P. W., and Holyoak, K. J. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.

Cheng, P., Holyoak, K., Nisbett, R., and Oliver, L. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 18, 293-328.

Clancey, W. J. (1985). Heuristic classification. Artificial Intelligence, 27, 289-350.

Cohen, P., Day, D., Delisio, J., Greenberg, M., Kjeldsen, R., and Suthers, D. (1987). Management of uncertainty in medicine. In Proceedings of the IEEE Conference on Computers and Communications. New York: IEEE.

Coombs, M. J. (1986). Artificial intelligence and cognitive technology: Foundations and perspectives. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.

Coombs, M. J., and Alty, J. L. (1984). Expert systems: An alternative paradigm. International Journal of Man-Machine Studies, 20, 21-43.

Coombs, M. J., and Hartley, R. T. (1987). The MGR algorithm and its application to the generation of explanations for novel events. International Journal of Man-Machine Studies, 27, 679-708. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.

Davis, R. (1983). Reasoning from first principles in electronic troubleshooting. International Journal of Man-Machine Studies, 19, 403-423.

De Keyser, V. (1986). Les interactions hommes-machine: Caractéristiques et utilisations des différents supports d'information par les opérateurs (Person-machine interaction: How operators use different information channels). Rapport Politique Scientifique/FAST no. 8. Liège, Belgium: Psychologie du Travail, Université de l'État à Liège.

Dennett, D. (1982). Beyond belief. In A. Woodfield (Ed.), Thought and object. Oxford: Clarendon Press.

Dorner, D. (1983). Heuristics and cognition in complex systems. In R. Groner, M. Groner, and W. F. Bischof (Eds.), Methods of heuristics. Hillsdale, NJ: Erlbaum.

Ehn, P., and Kyng, M. (1984). A tool perspective on design of interactive computer support for skilled workers. Unpublished manuscript, Swedish Center for Working Life, Stockholm.

Elm, W. C., and Woods, D. D. (1985). Getting lost: A case study in interface design. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 927-931). Santa Monica, CA: Human Factors Society.

Fischhoff, B. (1986). Decision making in complex systems. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support. New York: Springer-Verlag.

Fischhoff, B., Slovic, P., and Lichtenstein, S. (1979). Improving intuitive judgment by subjective sensitivity analysis. Organizational Behavior and Human Performance, 23, 339-359.

Fischhoff, B., Lanir, Z., and Johnson, S. (1986). Military risk taking and modern C3I (Tech. Report 86-2). Eugene, OR: Decision Research.

Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgments. Psychological Bulletin, 101, 75-90.

Furnas, G. W. (1986). Generalized fisheye views. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 16-23). New York: ACM/SIGCHI.

Gadd, C. S., and Pople, H. E. (1987). An interpretation synthesis model of medical teaching rounds discourse: Implications for expert system interaction. International Journal of Educational Research, 1.

Gentner, D., and Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Erlbaum.

Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.

Gick, M. L., and Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-365.

Glaser, R. (1984). Education and thinking: The role of knowledge. American Psychologist, 39, 93-104.

Gruber, T., and Cohen, P. (1987). Design for acquisition: Principles of knowledge system design to facilitate knowledge acquisition (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 143-159.

Henderson, A., and Card, S. (1986). Rooms: The use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface (Tech. Report). Palo Alto, CA: Xerox PARC.

Hirschhorn, L. (1984). Beyond mechanization: Work and technology in a postindustrial age. Cambridge, MA: MIT Press.

Hollan, J., Hutchins, E., and Weitzman, L. (1984). Steamer: An interactive inspectable simulation-based training system. AI Magazine, 4, 15-27.

Hollnagel, E., Mancini, G., and Woods, D. D. (Eds.). (1986). Intelligent decision support in process environments. New York: Springer-Verlag.

Hollnagel, E., and Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.

Hoogovens Report. (1976). Human factors evaluation: Hoogovens No. 2 hot strip mill (Tech. Report FR251). London: British Steel Corporation/Hoogovens.

Hutchins, E., Hollan, J., and Norman, D. A. (1985). Direct manipulation interfaces. Human-Computer Interaction, 1, 311-338.

James, W. (1890). The principles of psychology. New York: Holt.

Kieras, D. E., and Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365-394.

Klein, G. A. (in press). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine research (vol. 5). Greenwich, CT: JAI Press.

Kotovsky, K., Hayes, J. R., and Simon, H. A. (1985). Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 17, 248-294.

Kruglanski, A., Friedland, N., and Farkash, E. (1984). Lay persons' sensitivity to statistical information: The case of high perceived applicability. Journal of Personality and Social Psychology, 46, 503-518.

Lesh, R. (1987). The evolution of problem representations in the presence of powerful conceptual amplifiers. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics. Hillsdale, NJ: Erlbaum.

Malone, T. W. (1983). How do people organize their desks? Implications for designing office automation systems. ACM Transactions on Office Information Systems, 1, 99-112.


Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press. (Special issue of International Journal of Man-Machine Studies, vol. 27.)

March, J. G., and Weisinger-Baylon, R. (Eds.). (1986). Ambiguity and command. Marshfield, MA: Pitman Publishing.

McKendree, J., and Carroll, J. M. (1986). Advising roles of a computer consultant. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 35-40). New York: ACM/SIGCHI.

Miller, P. L. (1983). ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 449-461.

Mitchell, C., and Forren, M. G. (1987). Multimodal user input to supervisory control systems: Voice-augmented keyboard. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 594-607.

Mitchell, C., and Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.

Miyata, Y., and Norman, D. A. (1986). Psychological issues in support of multiple activities. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.

Montmollin, M. de, and De Keyser, V. (1985). Expert logic vs. operator logic. In G. Johannsen, G. Mancini, and L. Martensson (Eds.), Analysis, design and evaluation of man-machine systems. Ispra, Italy: CEC-JRC/IFAC.

Muir, B. (1987). Trust between humans and machines. International Journal of Man-Machine Studies, 27, 527-539. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.

Newell, A., and Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209-242.

Noble, D. F. (1984). Forces of production: A social history of industrial automation. New York: Alfred A. Knopf.

Norman, D. A. (1981). Steps towards a cognitive engineering (Tech. Report). San Diego: University of California, San Diego, Program in Cognitive Science.

Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the ACM, 26, 254-258.

Norman, D. A., and Draper, S. W. (1986). User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.

Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.

Perkins, D., and Martin, F. (1986). Fragile knowledge and neglected strategies in novice programmers. In E. Soloway and S. Iyengar (Eds.), Empirical studies of programmers. Norwood, NJ: Ablex.

Pew, R. W., et al. (1986). Cockpit automation technology (Tech. Report 6133). Cambridge, MA: BBN Laboratories, Inc.

Pope, R. H. (1978). Power station control room and desk design: Alarm system and experience in the use of CRT displays. In Proceedings of the International Symposium on Nuclear Power Plant Control and Instrumentation. Cannes, France.


Pople, H., Jr. (1985). Evolution of an expert system: From Internist to Caduceus. In I. De Lotto and M. Stefanelli (Eds.), Artificial intelligence in medicine. New York: Elsevier Science Publishers.

Quinn, L., and Russell, D. M. (1986). Intelligent interfaces: User models and planners. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 314-320). New York: ACM/SIGCHI.

Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.

Rizzo, A., Bagnara, S., and Visciola, M. (1987). Human error detection processes. International Journal of Man-Machine Studies, 27, 555-570. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.

Robertson, G., McCracken, D., and Newell, A. (1981). The ZOG approach to man-machine communication. International Journal of Man-Machine Studies, 14, 461-488.

Rochlin, G. I., La Porte, T. R., and Roberts, K. H. (in press). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review.

Roth, E. M., Bennett, K., and Woods, D. D. (1987). Human interaction with an intelligent machine. International Journal of Man-Machine Studies, 27, 479-525. Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.), (in press), Cognitive engineering in dynamic worlds. London: Academic Press.

Roth, E. M., and Woods, D. D. (1988). Aiding human performance: I. Cognitive analysis. Le Travail Humain, 51(1), 39-64.

Schum, D. A. (1980). Current developments in research on cascaded inference. In T. S. Wallsten (Ed.), Cognitive processes in decision and choice behavior. Hillsdale, NJ: Erlbaum.

Selfridge, O. G., Rissland, E. L., and Arbib, M. A. (1984). Adaptive control of ill-defined systems. New York: Plenum Press.

Sheridan, T., and Hennessy, R. (Eds.). (1984). Research and modeling of supervisory control behavior. Washington, DC: National Academy Press.

Shneiderman, B. (1986). Seven plus or minus two central issues in human-computer interaction. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 343-349). New York: ACM/SIGCHI.

Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S., and Suchman, L. (1985, September). Beyond the chalkboard: Using computers to support collaboration and problem solving in meetings (Tech. Report). Palo Alto, CA: Intelligent Systems Laboratory, Xerox Palo Alto Research Center.

Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.

Wiecha, C., and Henrion, M. (1987). A graphical tool for structuring and understanding quantitative decision models. In Proceedings of the Workshop on Visual Languages. New York: IEEE Computer Society.

Wiener, E. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229-244.

Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.

Woods, D. D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L. P. Goodstein, H. B. Andersen, and S. E. Olsen (Eds.), Mental models, tasks and errors: A collection of essays to celebrate Jens Rasmussen's 60th birthday. London: Taylor and Francis.

Woods, D. D., and Hollnagel, E. (1987). Mapping cognitive demands in complex problem solving worlds (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 257-275.

Woods, D. D., and Roth, E. M. (1986). Models of cognitive behavior in nuclear power plant personnel (NUREG-CR-4532). Washington, DC: U.S. Nuclear Regulatory Commission.

Woods, D. D., and Roth, E. M. (1988a). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interaction. New York: North-Holland.

Woods, D. D., and Roth, E. M. (1988b). Aiding human performance: II. From cognitive analysis to support systems. Le Travail Humain, 51, 139-172.

Woods, D. D., Roth, E. M., and Pople, H. (1987). Cognitive Environment Simulation: An artificial intelligence system for human performance assessment (NUREG-CR-4862). Washington, DC: U.S. Nuclear Regulatory Commission.

428-August 1988

Carroll J M and Carrithers C (1984) Training wheels ina user interface Communications of the ACM 27800-806

Cheng P W and Holyoak K J (1985) Pragmatic reason-ing schemas Cognitive Psychology 17 391-416

Cheng P Holyoak K Nisbett R and Oliver 1 (1986)Pragmatic versus syntactic approaches to training de-ductive reasoning Cognitive Psychology 18293-328

Clancey W J (1985) Heuristic classification Artificial In-telligence 27 289-350

Cohen P Day D Delisio J Greenberg M Kjeldsen Rand Suthers D (1987) Management of uncertainty inmedicine In Proceedings of the IEEE Conference onComputers and Communications New York IEEE

Coombs M J (1986) Artificial intelligence and cognitivetechnology Foundations and perspectives In E Holl-nagel G Mancini and D D Woods (Eds) Intelligentdecision support in process environments New YorkSpringer- Verlag

Coombs M J and Alty J 1 (1984) Expert systems Analternative paradigm International Journal of Man-Machine Studies 20 21-43

Coombs MJ and Hartley R T (1987) The MGR algo-rithm and its application to the generation of explana-tions for novel events International Journal of Man-Machine Studies 27 679-708 Also in Mancini GWoods D and Hollnagel E (Eds) (in press) Cogni-tive engineering in dynamic worlds London AcademicPress

Davis R (1983) Reasoning from first principles in elec-tronic troubleshooting International Journal of Man-Machine Studies 19403-423

De Keyser V (1986) Les interactions hommes-machineCaracteristiques et utilisations des different supportsdinformation par les operateurs (Person-machine inter-action How operators use different information chan-nels) Rapport Politique ScientifiqueFAST no 8Liege Belgium Psychologie du Travail Universite deIEtat a Liege

Dennett D (1982) Beyond belief In A Woodfield (Ed)Thought and object Oxford Clarendon Press

Dorner D (1983) Heuristics and cognition in complexsystems In R Groner M Groner and W F Bischof(Eds) Methods of heuristics Hillsdale NJ Erlbaum

Ehn P and Kyng M (1984) A tool perspective on design ofinteractive computer support for skilled workers Unpub-lished manuscript Swedish Center for Working LifeStockholm

Elm W C and Woods D D (1985) Getting lost A casestudy in interface design In Proceedings of the HumanFactors Society 29th Annual Meeting (pp 927-931)Santa Monica CA Human Factors Society

Fischhoff B (1986) Decision making in complex systemsIn E Hollnagel G Mancini and D D Woods (Eds)Intelligent decision support New York Springer-Ver-lag

Fischhoff B Slovic P and Lichtenstein S (1979) Im-proving intuitive judgment by subjective sensitivityanalysis Organizational Behavior and Human Perfor-mance 23 339-359

Fischhoff B Lanir Z and Johnson S (1986) Militaryrisk taking and modem C31(Tech Report 86-2) EugeneOR Decision Research

Funder D C (1987) Errors and mistakes Evaluating theaccuracy of social judgments Psychological Bulletin10175-90

Furnas G W (1986) Generalized fisheye views In M

HUMAN FACTORS

Mantei and P Orbeton (Eds) Human factors in com-puting systems CHJ86 Conference Proceedings (pp16-23) New York ACMSIGCHI

Gadd C S and Pople H E (1987) An interpretation syn-thesis model of medical teaching rounds discourseImplications for expert system interaction Interna-tional Journal of Educational Research 1

Gentner D and Stevens A 1 (Eds) (1983) Mentalmodels Hillsdale NJ Erlbaum

Gibson J J (1979) The ecological approach to visual per-ception Boston Houghton Mifflin

Gick M 1 and Holyoak K J (1980) Analogical problemsolving Cognitive Psychology 12 306-365

Glaser R (1984) Education and thinking The role ofknowledge American Psychologist 39 93-104

Gruber T and Cohen P (1987) Design for acquisitionPrinciples of knowledge system design to facilitateknowledge acquisition (special issue on knowledge ac-quisition for knowledge-based systems) InternationalJournal of Man-Machine Studies 26 143-159

Henderson A and Card S (1986) Rooms The use of mul-tiple virtual workspaces to reduce space contention in awindow-based graphical user interface (Tech Report)Palo Alto Xerox PARCo

Hirschhorn 1 (1984) Beyond mechanization Work andtechnology in a postindustrial age Cambridge MA MITPress

Hollan J Hutchins E and Weitzman 1 (1984)Steamer An interactive inspectable simulation-basedtraining system AI Magazine 4 15-27

Hollnagel E Mancini G and Woods D D (Eds) (1986)Intelligent decision support in process environmentsNew York Springer-Verlag

Hollnagel E and Woods D D (1983) Cognitive systemsengineering New wine in new bottles InternationalJournal of Man-Machine Studies 18 583-600

Hoogovens Report (1976) Human factors evaluation Hoo-govens No2 hot strip mill (Tech Report FR251) Lon-don British Steel CorporationHoogovens

Hutchins E Hollan J and Norman D A (1985) Directmanipulation interfaces Human-Computer Interac-tion 1311-338

James W (1890) The principles of psychology New YorkHolt

Kieras D E and Polson P G (1985) An approach to theformal analysis of user complexity International Jour-nal of Man-Machine Studies 22 365-394

Klein G A (in press) Recognition-primed decisions InW B Rouse (td) Advances in man-machine researchvol 5 Greenwich CT JAI Press

Kotovsky K Hayes J R and Simon H A (1985) Whyare some problems hard Evidence from Tower ofHanoi Cognitive Psychology 17 248-294

Kruglanski A Friedland N and Farkash E (1984) Laypersons sensitivity to statistical information The caseof high perceived applicability Journal of Personalityand Social Psychology 46 503-518

Lesh R (1987) The evolution of problem representationsin the presence of powerful conceptual amplifiers InC Janvier (Ed) Problems of representation in the teach-ing and learning of mathematics Hillsdale NJ Erl-baum

Malone T W (1983) How do people organize their desksImplications for designing office automation systemsACM Transactions on Office Information Systems 199-112

COGNITIVE ENGINEERING

Mancini G Woods D D and Hollnagel E (Eds) (inpress) Cognitive engineering in dynamic worlds Lon-don Academic Press (Special issue of InternationalJournal of Man-Machine Studies vol 27)

March J G and Weisinger-Baylon R (Eds) (1986) Am-biguity and command Marshfield MA Pitman Pub-lishing

McKendree 1 and Carroll J M (1986) Advising roles ofa computer consultant In M Mantei and P Oberton(Eds) Human factors in computing systems CHI86Conference Proceedings (pp 35-40) New York ACMSIGCHI

Miller P L (1983) ATTENDING Critiquing a physiciansmanagement plan IEEE Transactions on Pattern Anal-ysis and Machine Intelligence PAMI-5 449-461

Mitchell C and Forren M G (1987) Multimodal userinput to supervisory control systems Voice-aug-mented keyboard IEEE Transactions on SystemsMan and Cybernetics SMC-I7 594-607

Mitchell C and Saisi D (1987) Use of model-based qual-itative icons and adaptive windows in workstations forsupervisory control systems IEEE Transactions onSystems Man and Cybernetics SMC-I7 573-593

Miyata Y and Norman D A (1986) Psychological issuesin support of multiple activities In D A Norman andS W Draper (Eds) User-centered system design Newperspectives on human-computer interaction HillsdaleNJ Erlbaum

Montmollin M de and De Keyser V (1985) Expert logicvs operator logic In G Johannsen G Mancini and LMartensson (Eds) Analysis design and evaluation ofman-machine systems CEC-JRC Ispra Italy IFAC

Muir B (1987) Trust between humans and machinesln-ternational Journal of Man-Machine Studies 27527-539 Also in Mancini G Woods D and Hollna-gel E (Eds) (in press) Cognitive engineering in dy-namic worlds London Academic Press

Newell A and Card S K (1985) The prospects for psy-chological science in human-computer interactionHuman-Computer Interaction I 209-242

Noble D F (1984) Forces of production A social history ofindustrial automation New York Alfred A Knopf

Norman D A (1981) Steps towards a cognitive engineering(Tech Report) San Diego University of CaliforniaSan Diego Program in Cognitive Science

Norman D A (1983) Design rules based on analyses ofhuman error Communications of the ACM 26254-258

Norman D A and Draper S W (1986) User-centeredsystem design New perspectives on human-computer in-teraction Hillsdale NJ Erlbaum

Pea R D (1985) Beyond amplification Using the com-puter to reorganize mental functioning Educationalpsychologist 20 167-182

Perkins D and Martin F (1986) Fragile knowledge andneglected strategies in novice programmers In E So-loway and S Iyengar (Eds) Empirical studies of pro-grammers Norwood NJ Ablex

Pew R W et at (1986) Cockpit automation technology(Tech Report 6133) Cambridge MA BBN Laborato-ries Inc

Pope R H (1978) Power station control room and deskdesign Alarm system and experience in the use of CRTdisplays In Proceedings of the International Sympo-sium on Nuclear Power Plant Control and Instrumenta-tion Cannes France

August 1988-429

Pople H Jr (1985) Evolution of an expert system Frominternist to caduceus In L De Lotto and M Stefanelli(Eds) Artificial intelligence in medicine New York El-sevier Science Publishers

Quinn L and Russell D M (1986) Intelligent interfacesUser models and planners In M Mantei and P Ober-ton (Eds) Human factors in computing systemsCHI86 Conference Proceedings (pp 314-320) NewYork ACMSIGCHI

Rasmussen J (1986) Information processing and human-machine interaction An approach to cognitive engineer-ing New York North-Holland

Rizzo A Bagnara S and Visciola M (1987) Humanerror detection processes International Journal ofMan-Machine Studies 27 555-570 Also in ManciniG Woods D and Hollnagel E (Eds) (in press) Cog-nitive engineering in dynamic worlds London Aca-demic Press

Robertson G McCracken D and Newell A (1981) TheZOG approach to man-machine communication Inter-nationaJ ournal of M an-Machine Studies 14 461-488

Rochlin G I La Porte T R and Roberts K H (inpress) The self-designing high-reliability organiza-tion Aircraft carrier flight operations at sea NavalWar College Review

Roth E M Bennett K and Woods D D (1987) Humaninteraction with an intelligent machine Interna-tional Journal of Man-Machine Studies 27 479-525Also in Mancini G Woods D and Hollnagel E(Eds) (in press) Cognitive engineering in dynamicworlds London Academic Press

Roth E M and Woods D D (1988) Aiding human per-formance L Cognitive analysis Le Travail Humain51(1)39-64

Schum D A (1980) Current developments in research oncascaded inference In T S Wallstein (Ed) Cognitiveprocesses in decision and choice behavior HillsdaleNJ Erlbaurn

Selfridge O G Rissland E L and Arbib M A (1984)Adaptive control of ill-defined systems New York Ple-num Press

Sheridan T and Hennessy R (Eds) (1984) Research andmodeling of supervisory control behavior WashingtonDC National Academy Press

Shneiderman B (1986) Seven plus or minus two centralissues in human-computer interaction In M Manteiand P Obreton (Eds) Human factors in computingsystems CHI86 Conference Proceedings (pp 343-349)New York ACMSIGCHL

Stefik M Foster G Bobrow D Kahn K Lanning Sand Suchman L (1985 September) Beyond the chalk-board Using computers to support collaboration andproblem solving in meetings (Tech Report) Palo AltoCA Intelligent Systems Laboratory Xerox Palo AltoResearch Center

Suchman L A (1987) Plans and situated actions Theproblem of human-machine communication Cam-bridge Cambridge University Press

Wiecha C and Henrion M (1987) A graphical tool forstructuring and understanding quantitative decisionmodels In Proceedings of Workshop on Visual Lan-guages New York IEEE Computer Society

Wiener E (1985) Beyond the sterile cockpit Human Fac-tors 27 75-90

Woods D D (1984) Visual momentum A concept to im-prove the cognitive coupling of person and computer

430 -August 1988

International Journal of Man-Machine Studies 21229-244

Woods D D (1986) Paradigms for intelligent decisionsupport In E Hollnagel G Mancini and D D Woods(Eds) Intelligent decision support in process environ-ments New York Springer-Verlag

Woods D D (1988) Coping with complexity The psy-chology of human behavior in complex systems InL P Goodstein H B Andersen and S E Olsen (Eds)Mental models tasks and errors A collection of essays tocelebrate Jens Rasmussens 60th birthday London Tay-lor and Francis

Woods D D and Hollnagel E (1987) Mapping cognitivedemands in complex problem solving worlds (specialissue on knowledge acquisition for knowledge-basedsystems) International Journal of Man-Machine Stud-ies 26 257-275

HUMAN FACTORS

Woods D D and Roth E M (1986) Models of cognitivebehavior in nuclear power plant personnel (NUREG-CR-4532) Washington DC US Nuclear RegulatoryCommission

Woods D D and Roth E M (l988a) Cognitive systemsengineering In M Helander (Ed) Handbook ofhuman-computer interaction New York North-Hol-land

Woods D D and Roth E M (l988b) Aiding human per-formance II From cognitive analysis to support sys-tems Le Travail Humain 51 139-172

Woods D D Roth E M and Pople H (1987) CognitiveEnvironment Simulation An artificial intelligence sys-tem for human performance assessment (NUREG-CR-4862) Washington DC US Nuclear Regulatory Com-mission

COGNITIVE ENGINEERING

Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press. (Special issue of International Journal of Man-Machine Studies, vol. 27)

March, J. G., and Weisinger-Baylon, R. (Eds.). (1986). Ambiguity and command. Marshfield, MA: Pitman Publishing.

McKendree, J., and Carroll, J. M. (1986). Advising roles of a computer consultant. In M. Mantei and P. Oberton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 35-40). New York: ACM SIGCHI.

Miller, P. L. (1983). ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 449-461.

Mitchell, C., and Forren, M. G. (1987). Multimodal user input to supervisory control systems: Voice-augmented keyboard. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 594-607.

Mitchell, C., and Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.

Miyata, Y., and Norman, D. A. (1986). Psychological issues in support of multiple activities. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.

Montmollin, M. de, and De Keyser, V. (1985). Expert logic vs. operator logic. In G. Johannsen, G. Mancini, and L. Martensson (Eds.), Analysis, design and evaluation of man-machine systems. CEC-JRC Ispra, Italy: IFAC.

Muir, B. (1987). Trust between humans and machines. International Journal of Man-Machine Studies, 27, 527-539. Also in Mancini, G., Woods, D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Newell, A., and Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209-242.

Noble, D. F. (1984). Forces of production: A social history of industrial automation. New York: Alfred A. Knopf.

Norman, D. A. (1981). Steps towards a cognitive engineering (Tech. Report). San Diego: University of California, San Diego, Program in Cognitive Science.

Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the ACM, 26, 254-258.

Norman, D. A., and Draper, S. W. (1986). User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.

Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.

Perkins, D., and Martin, F. (1986). Fragile knowledge and neglected strategies in novice programmers. In E. Soloway and S. Iyengar (Eds.), Empirical studies of programmers. Norwood, NJ: Ablex.

Pew, R. W., et al. (1986). Cockpit automation technology (Tech. Report 6133). Cambridge, MA: BBN Laboratories, Inc.

Pope, R. H. (1978). Power station control room and desk design: Alarm system and experience in the use of CRT displays. In Proceedings of the International Symposium on Nuclear Power Plant Control and Instrumentation. Cannes, France.


Pople, H., Jr. (1985). Evolution of an expert system: From Internist to Caduceus. In I. De Lotto and M. Stefanelli (Eds.), Artificial intelligence in medicine. New York: Elsevier Science Publishers.

Quinn, L., and Russell, D. M. (1986). Intelligent interfaces: User models and planners. In M. Mantei and P. Oberton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 314-320). New York: ACM SIGCHI.

Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.

Rizzo, A., Bagnara, S., and Visciola, M. (1987). Human error detection processes. International Journal of Man-Machine Studies, 27, 555-570. Also in Mancini, G., Woods, D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Robertson, G., McCracken, D., and Newell, A. (1981). The ZOG approach to man-machine communication. International Journal of Man-Machine Studies, 14, 461-488.

Rochlin, G. I., La Porte, T. R., and Roberts, K. H. (in press). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review.

Roth, E. M., Bennett, K., and Woods, D. D. (1987). Human interaction with an intelligent machine. International Journal of Man-Machine Studies, 27, 479-525. Also in Mancini, G., Woods, D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Roth, E. M., and Woods, D. D. (1988). Aiding human performance: I. Cognitive analysis. Le Travail Humain, 51(1), 39-64.

Schum, D. A. (1980). Current developments in research on cascaded inference. In T. S. Wallstein (Ed.), Cognitive processes in decision and choice behavior. Hillsdale, NJ: Erlbaum.

Selfridge, O. G., Rissland, E. L., and Arbib, M. A. (1984). Adaptive control of ill-defined systems. New York: Plenum Press.

Sheridan, T., and Hennessy, R. (Eds.). (1984). Research and modeling of supervisory control behavior. Washington, DC: National Academy Press.

Shneiderman, B. (1986). Seven plus or minus two central issues in human-computer interaction. In M. Mantei and P. Oberton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 343-349). New York: ACM SIGCHI.

Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S., and Suchman, L. (1985, September). Beyond the chalkboard: Using computers to support collaboration and problem solving in meetings (Tech. Report). Palo Alto, CA: Intelligent Systems Laboratory, Xerox Palo Alto Research Center.

Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.

Wiecha, C., and Henrion, M. (1987). A graphical tool for structuring and understanding quantitative decision models. In Proceedings of the Workshop on Visual Languages. New York: IEEE Computer Society.

Wiener, E. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229-244.

Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.

Woods, D. D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L. P. Goodstein, H. B. Andersen, and S. E. Olsen (Eds.), Mental models, tasks and errors: A collection of essays to celebrate Jens Rasmussen's 60th birthday. London: Taylor and Francis.

Woods, D. D., and Hollnagel, E. (1987). Mapping cognitive demands in complex problem solving worlds (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 257-275.


Woods, D. D., and Roth, E. M. (1986). Models of cognitive behavior in nuclear power plant personnel (NUREG-CR-4532). Washington, DC: U.S. Nuclear Regulatory Commission.

Woods, D. D., and Roth, E. M. (1988a). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interaction. New York: North-Holland.

Woods, D. D., and Roth, E. M. (1988b). Aiding human performance: II. From cognitive analysis to support systems. Le Travail Humain, 51, 139-172.

Woods, D. D., Roth, E. M., and Pople, H. (1987). Cognitive Environment Simulation: An artificial intelligence system for human performance assessment (NUREG-CR-4862). Washington, DC: U.S. Nuclear Regulatory Commission.
