
Chapter 8

The Time to Decide: How Awareness and Collaboration Affect the Command Decision Making

Douglas J. Peters, LeRoy A. Jackson, Jennifer K. Phillips, and Kami G. Ross

Ultimately, it is the command decision, and the resulting action, that affects the battle outcome. All the processes we have discussed to this point—collection of information, collaboration, and formation of situation awareness—contribute to the success of the battle only inasmuch as they enable effective battle decisions. Figure 8.1 depicts but a small part of the complex relations between actions, decisions, collaboration, situation awareness, and automation, as we observed them in the MDC2 program. Command decisions—both the command cell's decisions and the automated decisions—lead to battle actions.

These, in turn, alter the battlefield situation, bring additional information, often increase or decrease uncertainty, and engender or impede collaboration. Changes in the availability of information lead to a modified common operating picture, automated decisions produced by the system, and further actions. These changes also lead to changes in the awareness of the battle situation in the minds of the human decision makers. Collaboration impacts the human situation awareness both positively and negatively (as we have seen in the previous chapters), which in turn affects the quality and timeliness of decisions and actions.

Still, the complexity of these relations in itself does not indicate that decision making in such an environment is difficult, or at least does not inform us what makes it difficult. Yet, as the previous chapters have told us, the command-cell members often find it very challenging to arrive at even a remotely satisfactory decision. Why, then, is decision making so difficult in this environment?

After all, we provide the cell members with a powerful information gathering, integration, and presentation system. We give them convenient tools to examine the available information and to explore its meaning in collaboration with other decision makers. The CSE system offers many automatically generated decisions, such as allocation and routing of resources for fire and intelligence collection tasks. The cell has established effective procedures for allocation and integration of decision-making tasks. Yet effective decision making continues to be a challenge, in spite of all these aids.

Figure 8.1. Commander Decision Environment—complex relations between actions, decisions, collaboration, situation awareness, and automation. See Appendix for explanation of abbreviations.

One highly visible culprit is the lack of usable information: incompleteness of battlespace information, doubts about the reliability of the available information, and uncertainty about the likelihood of a decision's consequences or about the utility of the respective alternatives. A theorist of military command argues that the lack of information and its uncertainty are the most important drivers of command: "The history of command can be understood in terms of a race between the demand for information and the ability of command systems to meet it. The quintessential problem facing any command system is dealing with uncertainty" (van Creveld 1985).

Another major source of challenges involves the limits on the rationality of human decision makers (Simon 1991). Such limitations are diverse: constraints on the amount and complexity of the information that a human can process or acquire in a given time period, and multiple known biases in decision making. In particular, time pressure is a well-recognized source of errors in human decision making—as the number of decision tasks per unit time grows, the average quality of decisions deteriorates (Louvet, Casey, and Levis 1988). In network-enabled warfare, when a small command cell is subjected to a flood of information, much of which requires some decision, the time pressure can be a major threat to the quality of decision making (Kott 2007). Galbraith, for example, argued that the ability of a decision-making organization to produce successful performance is largely a function of avoiding information-processing overload (Galbraith 1974).

Human decision-making biases are surprisingly powerful and resistant to mitigation. Many experiments demonstrate that real human decision making exhibits consistent and pervasive deviations (often termed paradoxes) from the expected utility theory, which for decades was accepted as a normative model of rational decision making. For example, humans tend to prefer those outcomes that have greater certainty, even if their expected utility is lower than that of alternative outcomes. For this reason, it is widely believed that bounded rationality is a more accurate characterization of human decision making than is the rationality described by expected utility theory (Tversky and Kahneman 1974; Kahneman and Tversky 1979). The anchoring and adjustment biases, for example, can be very influential when decision makers, particularly highly experienced ones, follow the decisions made in similar situations in the past (naturalistic decision making [Klein 1999]).

Although such biases can be valuable as cognitive shortcuts, especially under time pressure, they also are dangerous sources of potential vulnerabilities. For example, deception techniques are often based on the tendency of human decision makers to look for familiar patterns, to interpret the available information in light of their past experiences. Deceivers also benefit from confirmation bias, the tendency to discount evidence that contradicts an accepted hypothesis (Bell and Whaley 1991).

With a system like CSE, one might expect that biases are at least partially alleviated by computational aids. Decision-support agents like the Attack Guidance Matrix that we discussed earlier can greatly improve the speed and accuracy of decision making, especially when the information volume is large and time pressure is high. But they also add complexity to the system, leading to new and often more drastic types of errors, especially when interacting with humans (Perrow 1999).

Additional challenges of decision making stem from other factors, such as social forces within an organization, which go beyond the purely information-processing perspectives. For example, groupthink—the tendency of decision makers within a cohesive group to pressure each other toward unanimity and against voicing dissenting opinions (Janis 1982)—can produce catastrophic failures of decision making. Indeed, our observations of failures in command cells' collaboration point to possible groupthink tendencies, particularly in view of the fact that information overload encourages groupthink (Janis 1982, 196).

Which of these factors, if any, impact the decision making in network-enabled warfare, and to what extent? How much can a system like the CSE alleviate or perhaps aggravate such challenges to human decision making? As a key part of the MDC2 program, we sought to evaluate the ability of the command-cell members—commanders and staff—to make effective decisions in the information-rich environment of network-enabled warfare. Understanding the decision-making process of the commanders, the use of automated decision aids, and the presentation of critical information for decisions were crucial to this evaluation.

We begin this chapter by exploring how we collected information to support our decision-making analysis throughout the experimental program. This section chronicles not only the progression of the approaches we took but also what we learned from the methods themselves and how they were adapted to yield a richer set of data. We then proceed to discuss some of the lessons learned in the analysis of the data and their potential relevance to the development of future command tools.

COLLECTING THE DATA ABOUT DECISION MAKING

A key effort in the MDC2 experiments was to devise mechanisms for capturing the data about decision making. We found it remarkably challenging to obtain the data that would give us the desired insights. In a trial-and-error fashion, we proceeded through a number of approaches.

To begin with, we built automated loggers that captured an enormous quantity of data for each experimental run. For example, CSE automated decisions and contextual decision-making information (such as the SAt curves) were available directly from the data loggers. However, the raw data from these loggers were of limited direct use in evaluating human decision making because they could not quantify the commander's cognitive processes and his understanding of the situation. At best, they were helpful to support findings and to understand what was happening during critical decisions.

In addition to automated data logging, five other mechanisms were used to collect decision-related information: analytic observers, focus groups, operator interviews, surveys, and battle summary sessions. These were developed and refined throughout the experimental campaign, especially over the last five experiments in the campaign, beginning with Experiment 4a.

In Experiment 4a, we employed several traditional tools of the analyst's toolbox. During the experiments, analytic observers recorded significant events and characterized the effectiveness of the battle-command environment with respect to the commander's ability to execute his mission. Within our cadre of observers, one person was dedicated to recording and classifying every decision that the commander verbalized. Each decision was identified as relating to seeing (for example, repositioning sensors or classifying imagery), striking (for example, when and where to place fires), or moving (for example, how to array the forces for movement).

Additionally, each decision was characterized according to the associated complexity. We classified decisions that were prompted by a clear trigger and appeared to be made according to a small set of understandable rules as automatable decisions. Examples of automatable decisions were "Fire at that tank" and "Let's get BDA (Battle Damage Assessment) on that engagement."

Those decisions that were based on a well-understood and limited set of variables but required a degree of human judgment not reducible to well-understood rules were classified as adjustment decisions. An example of an adjustment decision was to determine when the necessary conditions are satisfied to begin operations.

Finally, decisions that required a broad, holistic understanding of the situation, encompassing a wide range of variables, and that fundamentally changed (or confirmed) the entire operation's strategy were characterized as complex decisions. An example of a complex decision from Experiment 4a: the commander identified a deficiency in his plan and saw the need to develop contingency plans: "If the enemy gets into Granite Pass, it is going to be very difficult for us to get through him. We need to look at some other maneuver options."

Collecting this information allowed us to characterize the decision making in a number of ways. Analyzing the types of decisions made by the commanders, we identified the battle functions on which the commander focused most of his attention. Likewise, decision complexity characterizations helped us better understand whether the commander was making decisions that could be automated with a tool or making frequent complex decisions. Together, these two characterizations enabled us to identify specific areas of the CSE that could be better tailored to support the decision maker's needs. Figure 8.2 shows a partial analysis of decisions by type from Experiment 4a.

Surveys, on the other hand, proved much less useful. After each experimental run, we asked each commander to complete a survey—his assessment of how well the run went and what challenged him during the run. These surveys, while containing occasional nuggets of interesting information, were largely ineffective because the questions were not specific to the events of a given run, and because, being the last event of a long day, they did not elicit sufficiently detailed responses from the fatigued commander and his staff.

Overall, although the decision characterization was useful in helping to improve the functions of the CSE, it did not tell us much about the effectiveness of the decisions, or about the specific information and conditions supporting effective decisions.

Figure 8.2. Experiment 4a summary of decisions by type and complexity. All articulated choices were recorded as decisions; 173 decisions were observed over 8 record runs. Decision types: Automatable—all variables known or can be calculated, something a computer can do (25%); Adjustment—mostly known variables within the plan context, requires human judgment (70%); Complex—requires definition of options, criteria, and decision process (5%). Decision focus and content: Move—the movement of organic assets (25%); See—the development of the intel picture (47%); Strike—the application of effects (28%). Of 32 Automatable-See decisions, 13 involved sensor allocation and positioning, 5 involved changes to the active sensor mode, 11 involved cross-cueing different sensors, and 3 involved micro-UAV use to enhance BDA.

Therefore, in Experiment 4b (a repeat of Experiment 4a with a different, less-experienced team of operators), we added a qualitative assessment of decisions. We conducted this assessment in post-run analysis sessions with the help of military subject matter experts, watching a replay of the battle and the events leading to the decision in question. The following criteria, derived from the Network Centric Operations Conceptual Framework (Evidence Based Research 2003), were used to evaluate the quality of a decision:

• Appropriateness: consistency of the decision with situation awareness (the situation as it was known to the decision maker at the moment), mission objectives, and commander's intent.

• Correctness: consistency of the decision with ground truth (i.e., with the actual situation).

• Timeliness: whether the decision is made within the window of opportunity.

• Completeness: the extent to which all the necessary factors are considered in making a decision.

• Relevance: the extent to which the decision is directly related to the mission.


• Confidence: the extent to which the decision maker is confident in a decision made.

• Outcome consistency: the extent to which the outcome of the decision is consistent with the intended outcome.

• Criticality: the extent to which the decision made is critical to mission success.
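To illustrate how ratings against such a rubric can be captured for analysis, the following Python sketch records an expert's scores on these eight criteria and computes a simple summary. The 1-to-5 scale, the field names, and the mean-score summary are illustrative assumptions, not the actual instrument used in the experiments.

    from dataclasses import dataclass, fields

    @dataclass
    class DecisionQualityRating:
        """One expert's rating of a single decision (scale assumed 1..5)."""
        appropriateness: int
        correctness: int
        timeliness: int
        completeness: int
        relevance: int
        confidence: int
        outcome_consistency: int
        criticality: int

        def mean_score(self) -> float:
            # Coarse summary, useful for comparing decisions across runs.
            values = [getattr(self, f.name) for f in fields(self)]
            return sum(values) / len(values)

    rating = DecisionQualityRating(4, 3, 5, 4, 5, 4, 3, 5)
    print(f"mean quality score: {rating.mean_score():.2f}")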

Although this approach provided us with extensive data on the quality of the decisions (e.g., see Figure 8.3), it also proved to be of limited use. Without determining the context and the reasons for a decision, and the information that led to the decision, we could not pinpoint ways for the CSE to improve the decision-making environment.

In addition to the study of decision quality, we introduced another data-collection approach that showed its initial promise in Experiment 4b but then became a core analytic tool for later experiments. Process tracing (Shattuck and Miller 2004) examines a single episode of an experimental run in detail. This methodology connects collaboration to changes in SA, and SA to decision making, with a focus on the operators and their use of the CSE. Process tracing externalizes internal processes (Woods 1993) and tries to explain the genesis of a decision by mapping out how an episode unfolded, including information elements available to the operators, what information was noted by operators, and operators' interpretations of the information in immediate and larger contexts.

In Experiment 4b, we completed process tracing for a single event, and although we were unable to draw any significant insights from one event, the methodology showed promise for understanding both the context of a decision and the challenges that faced the decision maker at the time of the decision.

Figure 8.3. Expert assessments considered the correctness, timeliness, relevance, and other characteristics of decisions. Note: only losses of key UAV assets and manned systems are highlighted with names.


With the introduction of a manned dismounted platoon for Experiment 5, the complexity of the decision-making environment increased significantly. Now, instead of communicating his thoughts and decisions to staff members located in the same vehicle, the commander had to convey his intent and orders to subordinate commanders reachable via the radio and shared displays, with sufficient clarity and detail. Our analysts also examined information requirements for warfighters conducting dismounted operations.

The process-tracing techniques were well suited for this complex environment, and we focused on identifying key decisions during each run and analyzing those decisions in detail. The detailed process tracing combined video and audio playback of events leading to a decision, audio logs of the communications, query results from the automated loggers, the SAt curve, observer notes, and interview records. All these components together supported a very detailed study of short-duration events.

To facilitate these process tracings, we compiled critical information into a single source. By plotting different types of information across a common time axis, we were able to show what was happening at various time points during the battle. Because these charts were developed by stacking multiple variables against a common time axis, we referred to these composite views as stacked charts. An example is shown in Figure 5.10. This particular stacked chart was developed to help us simultaneously view decision making, collaboration, information availability, and battle tempo data. The relations between these elements helped us understand what events shaped a key decision.
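As a rough illustration of the stacked-chart idea, the following Python sketch plots several synthetic data streams against a shared time axis; the variable names, the data, and the marked decision point are all invented for illustration and are not drawn from the experiments.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    t = np.linspace(0, 140, 300)                 # battle time, minutes
    sa_curve = np.clip(np.cumsum(rng.random(300)) / 160, 0, 1)
    detections = rng.poisson(0.2, 300).cumsum()  # cumulative enemy detections
    radio_traffic = rng.random(300)              # proxy for collaboration activity

    fig, axes = plt.subplots(3, 1, sharex=True, figsize=(8, 6))
    series = [(sa_curve, "SAt (awareness)"),
              (detections, "Cumulative detections"),
              (radio_traffic, "Radio traffic")]
    for ax, (data, label) in zip(axes, series):
        ax.plot(t, data)
        ax.set_ylabel(label)
        ax.axvline(52, linestyle="--", color="gray")  # a decision point of interest
    axes[-1].set_xlabel("Time (minutes)")
    plt.show()

Stacking the rows on one time axis is what lets an analyst see, at a glance, what information was arriving and what collaboration was under way at the moment a decision was made.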

Of particular value in this methodology is a technique for extracting critical information through interviews. Given our earlier lack of success with end-of-run surveys, we were eager to try a technique that would allow us to identify details of critical decisions. The critical decision method of interviewing (Klein, Calderwood, and MacGregor 1989) uses a two-person team to identify a single decision made during a run and explore it in detail. There are four steps to this interviewing technique:

Step 1 is incident identification. The interviewer presents a situation or a critical event and asks the decision maker to talk about the event from his perspective, with a particular focus on the role he played during the event. The interviewer does not interrupt the interviewee with clarifying questions (these come later in Step 3), and the interview team takes careful notes regarding the actions and decisions made during the event.

Step 2 establishes the timeline of the event. The interviewer repeats the story back to the interviewee with special emphasis on the timing of events and decisions. Through this process, the interviewer becomes familiar with the subcomponents and timing of events, and how they impacted the outcomes and decisions made. Special attention is paid to decision points, shifts in situation awareness, gaps, and anomalies.

Step 3, deepening, tries to uncover the story behind the story. Here most of the detailed information becomes apparent—why things were done as they were, why decisions were or were not made, what information and experiential components contributed the most. This stage uses the event timeline and explores it in detail. Anomalies or gaps in the story are investigated during this phase.

Step 4 focuses on what-if queries. The purpose of this step is to consider what conditions may have made a critical difference in how the situation unfolded and in the decisions that were made. It also asks the question of what a less-experienced person may have done in the same situation, to further draw out the subtle factors that enable the interviewee to make effective decisions.

Because both the process traces and the interviews proved to be effective in Experiment 5, Experiments 6 and 7 built on these analytic tools and introduced two additional tools.

The first additional tool—a detailed timeline of a run—became necessary due to the increased complexity and duration of runs. Although we had extensive and detailed records of what happened during each run (including video and audio recordings), the task of producing a unified, concise description of what happened during a run was difficult after the experiment was complete. Therefore, after each experimental run, a group of analysts who had closely observed the various echelons and cells (friendly and enemy) wrote a short but complete synopsis of the run. In the synopsis they were able to capture concisely the flow of the battle and detail the most significant events of the battle from both the Blue and Red perspectives.

The second tool we introduced in the later experiments was focus groups. Organized for each command cell, a focus group session was relatively short (less than one hour) and was facilitated by a member of the core analysis team who observed that cell during planning and execution. The facilitator began the focus group session with candidate decisions of interest identified by the analysis team during or immediately after the run. A recorder took notes. After the focus group session, the facilitator or recorder briefed the entire analytic observer team on key findings.

At the focus group sessions, we tried to understand the battle in general, and the key events specifically, from the perspective of the operators. Facilitators used the following questions to guide the focus group and to ensure that all members participated in the session.

• Ask the operators to summarize the battle from their perspective. Brief back the key elements of the battle summary. Use the operators' words to the maximum extent possible. Introduce the decisions of interest, placing them in the context of the battle summary.

• Ask the operators to describe the events that led to a specific decision. Listen for decision points, collaborations, shifts in situation awareness, gaps in the story, gaps in the timeline, conceptual leaps, anomalies or violations of expectations, errors, ambiguous cues, individual differences, and who played the key roles. Ask clarifying questions and then brief back the incident timeline.

• Ask those operators who played key roles questions about situation assessment and cues. Listen for critical decisions, cues and their implications, ambiguous cues, strategies, anomalies, and violations of expected behavior with respect to the commander's intent.

• Ask operators to describe CSE-related issues. Ask probing questions as necessary. What worked well? What features helped their situation awareness? What features did you use to collaborate? How did you use automated decision-support functions? What did you not use, and why? What would you automate?

• Ask operators to describe procedure-related issues. Ask probing questions as necessary. What responsibilities were assigned to each operator? What tools were associated with the assigned responsibilities? When were the operators overwhelmed with the workload? How did the commander adjust staff roles during the mission? What new procedures did you implement, and why? What did you struggle with? Why did you use a certain procedure?

The combination of focus groups and operator interviews, along with the other quantitative data logs, gave us an ability to reconstruct the battle, to examine how decisions were made, and to identify issues that may affect battle command in the future force. The following sections describe some of the resulting conclusions.

THE HEAVY PRICE OF INFORMATION

Because the notional future force represented in our experimental program was heavily armed but lightly armored, availability of information was exceptionally critical to mission success. The cost of stumbling upon an undetected enemy asset was inevitably the loss of a critical piece of equipment. However, if the commander could find the enemy, he could use his precision weapons to engage the enemy at great distance. In order to find the enemy at long range, the force was equipped with a rich set of sensor platforms. The sheer number of sensors, along with the well-understood importance of information about enemy assets, led the commander to focus more attention on information needs than is common for commanders of today's forces. This additional emphasis on information was not only the result of the increased importance of information but also due to the increased availability of information.

Because even a single enemy entity could have a major impact on this lightly armored force, our commanders focused much of their attention on intelligence gathering regarding individual enemy platforms, in addition to the more conventional tasks of aggregating information on enemy formations and possible enemy courses of action. Our commanders needed to know where individual enemy entities were and, just as importantly, where they were not. In addition, they paid attention to the classification of detected entities and the condition of targets after they were engaged (BDA). In fact, the commander's strong focus on "seeing the enemy" at the expense of other functions became obvious when we analyzed the content of his decisions. For example, in Experiment 4a, almost half of the decisions verbalized by the command-cell members were characterized as see decisions (the other common types, move and strike decisions, accounted for about 25% each; see Figure 8.2).

Still, the commanders in our experiments tended to delegate the entity-based information-gathering responsibility to the intelligence manager. This helped devolve a substantial cognitive load from the commander and also served to unify control of the sensor assets. On the other hand, this delegation deprived the cell of the critical big picture of the enemy, since the intelligence manager was focused on finding and characterizing individual battlespace entities instead of developing an aggregated understanding of the enemy.

In Experiment 6, one of the commanders recognized this deficiency and saw that his intelligence manager was overloaded with tasks, while the effects manager was being underutilized (since many of the engagement tasks were automated or assisted by the CSE). The commander made the effects manager responsible for coordinating with the intelligence manager to obtain images for BDA and to conduct BDA assessments. The advantage of placing this responsibility with the effects manager was obvious—not only did it alleviate the cognitive load placed on the intelligence manager, but it also enabled a rapid reengagement of assets that were not destroyed by the original engagement. In general, the flexibility of the CSE facilitated opportunities for creative and unconventional allocation (and dynamic reallocation during the battle) of responsibilities between members of the command cell.

BDA proved to be particularly critical and demanding throughout the experimental program, and commanders struggled with obtaining conclusive assessments from their available images. More often than not, BDA images (produced with a realistic imagery simulator) did not provide enough information to make definitive conclusions about the results of an engagement. Thus, about 90 percent of BDA images from Experiment 4a were inconclusive (Figure 6.5 of Chapter 6). This ultimately led to frequent reengagements of targets in order to ensure they were destroyed. In Experiment 4a, 44 percent of targets were reengaged, and in Experiment 4b, 54 percent were reengaged.

The need to understand the state of enemy entities through effective BDA was clearly demonstrated in Experiment 4a, Run 6, where a single enemy armored personnel carrier destroyed enough of the Blue force to render the unit combat ineffective. This particular enemy entity had been engaged early in the battle and suffered a mobility-kill. However, the intelligence manager classified the asset as dead based on a BDA picture. This mistake was not found until it was too late. The Blue force was unable to continue its mission.

Undoubtedly, tomorrow's commanders will greatly benefit from the rich information available to them. At the same time, they will be heavily taxed with the need to process the vast information delivered through networked sensors—both initial intelligence and BDA. Commanders should expect to spend more time, perhaps over half of their time, on "seeing" the enemy. Part of the solution is to equip them with appropriate information-processing tools. In addition, the staff responsibilities should be continually reevaluated and reallocated to ensure that all critical duties are well covered.

ADDICTION TO INFORMATION

Information can be addictive. We often observed situations when commanders delayed important decisions in order to pursue an actual or perceived possibility of acquiring additional information. The cost of the additional information is time, and lost time is a heavy price to pay, especially for the future force that relies on agility.

As with today's commanders, uncertainty is present in all decisions, and decisions are often influenced by aversion to risk in the presence of uncertainty. Unlike today's commanders, however, our commanders had the tools readily available to them to further develop their information picture. They could reduce their uncertainty by maneuvering sensor platforms into position to better cover a critical area. This easy access to additional information was a double-edged sword because it often slowed the Blue force significantly. Commanders commonly sacrificed the speed advantage of their lightly armored force in order to satisfy their perceived need for information. These delays enabled the enemy to react to an assault and move to positions of advantage.

An example of this occurred in Experiment 4a, Run 8, where the commander incorrectly assessed that the enemy had a significant force along the planned axis of advance. Even after covering this area several times with sensors and not finding many enemy assets, the commander ordered ". . . need to slow down a bit in the north . . . don't want you wandering in there." At this time in the battle, the average velocity of moving Blue platforms dropped from 20 km/h to 5 km/h. The commander exposed his force to enemy artillery for the sake of obtaining even more detailed coverage of the area.

On the other hand, commanders also frequently made the opposite mistake when they rushed into an enemy ambush without adequate reconnaissance. An example of this occurred in Run 8 of Experiment 6, where several critical sensor assets were lost early in the run, and the CAU-1 commander quickly outran the coverage of his remaining sensors. In cases like this, the commander was lulled by the lack of enemy detections on his CSE screen and advanced without adequate information—perhaps perceiving the lack of detections as sufficient information to begin actions on the objective. This event is discussed in detail in the following section.

Today's commanders are often taught that the effectiveness of a decision is directly related to the timeliness of the decision. However, while timeliness will remain critical, tomorrow's commanders will need to pay more attention to the complex trade-offs between additional information and decision timeliness. Effective synchronization of information gathering with force maneuver is a formidable challenge in information-rich (and therefore potentially information-addictive) warfare. Both specialized training and new tools are required to prevent the failures that commanders experienced so often in our experiments.

Figure 8.4. SAt curve for Experiment 6, Run 8. Only losses of key UAV assets and manned systems are highlighted with names. See Appendix for explanation of abbreviations.

THE DARK SIDE OF COLLABORATION

Effective decision making can also be delayed and even derailed by collaboration. In certain cases, we observed that a commander's understanding of the current Blue or Red disposition degraded as a result of collaborations with subordinates, peers, or higher headquarters commanders. Unlike in Chapter 7, where we discuss cases of ineffective collaboration, here collaboration itself went well. However, the effects of the collaboration on a commander's decisions were highly detrimental.

Run 8 of Experiment 6 provides an interesting example of how collaboration can lull a decision maker into complacency by validating incorrect conclusions. In this run, the CAU-1 commander's force was destroyed by a strong enemy counterattack. Figure 8.4 shows the SAt curve for Run 8 with an overlay of time points when Blue entities were destroyed. At 32 minutes into this run (vertical dashed line), the CAT commander assessed that the enemy was defending heavy forward (i.e., mainly in the CAU-2 sector).

Several minutes later, the CAU-2 commander seemed to confirm that assessment with his report: "I suspect [the enemy's] intent is to defend heavy forward [in CAU-2 sector]." This assessment was derived from several detections made very early in the run. The figure shows that little new information about the enemy is acquired before the CAU-1 commander announces, "I'm not seeing any counterattacking forces moving towards us [i.e., CAU-1]. I think the majority of the enemy force is in [CAU-2's] sector," at 52 minutes into the run.

This would be a reasonable conclusion if he were using his sensors to develop the picture of the enemy, but in fact CAU-1 had focused his sensors on his flank and did not have any sensor coverage in the area where he was moving his troops. Soon thereafter, CAU-1 stumbled into a major Red counterattack force and was combat ineffective within minutes.

So, the obvious question is, why did the CAU-1 commander not make more effective use of his sensors? Certainly, one important factor was a tactical blunder early in the run that led to the destruction of several key sensor assets, leaving him with fewer sensors to conduct his mission. With this reduced set of sensors, the commander had to protect his flank, scout forward to the objective, and conduct necessary BDA. At 44 minutes into the fight, the commander tasked his staff member to reposition the sensors to scout the objective but was distracted by the collaboration with a staff member who declared that he had found several enemy assets far to the west.

Because of this collaboration, the commander neglected his intended mission of covering the area ahead of his force and began focusing attention far to the western flank of the advancing force. Yet, less than 10 minutes later, and with no new information about the objective, the commander was secure enough in his assessment that he began his offensive and was met with a major enemy counterattack force that decimated his unit.

There were several reasons for this poor decision to begin operations without conducting proper reconnaissance. The collaborative assessment of the situation with the CAU-2 commander and with the CAT commander led the CAU-1 commander to expect few enemy forces in his zone. Later, the commander's collaboration with a staff member confirmed his erroneous understanding that the enemy force was far from his zone.

Though this was a rather extreme example of a collaboration negatively affecting decision making, there were many other examples throughout the experiments that showed collaborations either distracting the commander from making critical decisions or lulling him into accepting an incorrect understanding of the battlespace. In fact, of seven collaboration process traces chosen for detailed analysis in Experiment 6, only three cases of collaboration yielded improved cognitive situation awareness for the operators. In the remaining four cases, collaboration dangerously distracted the decision maker from his primary focus or reinforced an incorrect understanding of the current Red or Blue disposition.

Consider that commanders in our experiments were equipped with a substantial collection of collaboration tools—instant messaging, multiple radio frequencies, shared displays, graphics overlays, and a shared whiteboard. Although the commanders took full advantage of these tools and found them clearly beneficial, there was also a significant cost to collaboration. To minimize such costs, future command cells will need effective protocols—and corresponding discipline—for collaborating: how often and under what circumstances collaboration occurs, with what tools, and in what manner.

AUTOMATION OF DECISIONS

Commanders and staffs used automated decisions extensively and could use them even more. However, the nature of these automated decisions requires an explanation. In effect, the CSE allowed the commander to formulate his decisions before a battle and enter them into the system. Then, during the operations, a set of predefined conditions would trigger the decisions. Thus, the decisions were actually made by the commander and staff. It was only the invocation and execution of these decisions that was often performed automatically when the proper conditions were met.

One type of such automatically triggered decision was the automated fires. The conditions for invoking a fire mission included staff-defined criteria for confidence level, type of target, the uncertainty of its location, and target-acquisition quality. Recall that in Chapter 3 we discussed the Attack Guidance Matrix (AGM), an intelligent agent within the CSE that identified enemy targets and calculated the most suitable ways to attack them with Blue fire assets. It could also execute fires; for example, it could issue a command to an automated unmanned mortar to fire at a particular target, automatically or semiautomatically, as instructed by the human staff member. Typically, a commander or an effects manager would specify the semiautomatic option: the AGM recommended the fire to them and would execute it only when a command-cell member approved the recommendation. Occasionally, in extreme situations, they would allow fully automated fires, without a human in the decision loop.
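A minimal sketch of this kind of staff-defined trigger logic, in Python, is shown below. The thresholds, field names, and target types are our assumptions for illustration; they are not the CSE's actual rule set.

    from dataclasses import dataclass

    @dataclass
    class Track:
        target_type: str          # classification of the detected entity
        confidence: float         # classification confidence, 0..1
        location_error_m: float   # uncertainty of the target's location
        acquisition_quality: float

    def fire_conditions_met(track: Track) -> bool:
        """Staff-defined criteria for invoking a fire mission (illustrative)."""
        return (track.target_type in {"tank", "artillery"}
                and track.confidence >= 0.85
                and track.location_error_m <= 50
                and track.acquisition_quality >= 0.7)

    def handle_track(track: Track, fully_automated: bool, approve) -> str:
        if not fire_conditions_met(track):
            return "hold"      # keep collecting intelligence on the track
        if fully_automated:
            return "fire"      # rare case: no human in the decision loop
        # Semiautomatic mode: recommend, then wait for a command-cell member.
        return "fire" if approve(track) else "hold"

    # Stand-in for a human effects manager approving the recommendation.
    decision = handle_track(Track("tank", 0.9, 30.0, 0.8),
                            fully_automated=False,
                            approve=lambda tr: True)
    print(decision)  # -> fire

The key point of the semiautomatic branch is that the system only proposes; the command-cell member remains the one who decides.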

Another similar type of automated decision making was an intelligent agent for automated BDA management. This agent used the commander-established rules to determine which sensor asset was the most appropriate to conduct BDA and would automatically task that asset to perform the BDA assignment. For example, it would automatically command a UAV to collect information about the status of a recently attacked target. Such decisions were made based on the specified criteria regarding the available sensor platforms, areas of responsibility, and enemy assets to be avoided.
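The sensor-selection rule might look something like the following sketch, which picks the nearest available sensor whose position keeps a minimum standoff from known threats. The rule itself, the standoff distance, and the data layout are assumptions for illustration only.

    import math

    def pick_bda_sensor(sensors, target, threats, min_standoff_km=5.0):
        """Choose a sensor for a BDA task under illustrative rules."""
        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])
        candidates = [
            s for s in sensors
            if s["available"]
            and all(dist(s["pos"], th) >= min_standoff_km for th in threats)
        ]
        # Nearest qualifying sensor gets the BDA assignment.
        return min(candidates, key=lambda s: dist(s["pos"], target), default=None)

    sensors = [
        {"id": "UAV-1", "pos": (2.0, 3.0), "available": True},
        {"id": "UAV-2", "pos": (9.0, 1.0), "available": True},  # too close to a threat
    ]
    tasked = pick_bda_sensor(sensors, target=(4.0, 4.0), threats=[(8.0, 2.0)])
    print(tasked["id"] if tasked else "no sensor available")  # -> UAV-1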

In each experiment, we found that command-cell members used the automated fires feature effectively and frequently. Commanders and effects managers spent ample time prior to the beginning of battle defining the conditions for automated fires. During the runs, these settings were rarely changed, and almost every run had instances of automated engagements of enemy assets.

However, there were also many manual engagements that could have been automated but weren't. Instead, a cell member would manually identify a Red target, select a Blue fire asset and suitable munitions, and then issue a command to fire—overall, a much more laborious and slower operation than a semiautomated fire. One reason for preferring such manual fires was that it often took too long to accumulate enough intelligence on an enemy target to meet the preestablished criteria for an automated or semiautomated fire decision—the criteria had to be fairly general and therefore too stringent. For example, since in our experimental scenarios there were relatively few civilian tracked vehicles in the battlespace (a bulldozer being an obvious exception), the effects manager would often engage any vehicle classified as tracked even before there was a clear indication that it was an enemy asset. At the same time, he was hesitant to allow automatic fires on all tracked targets.

In such cases, a manual engagement was intentional, but in other cases, the staff wondered aloud why an enemy vehicle was not being engaged. To the effects manager's eye, the specified conditions were apparently met, and the AGM should have initiated a fire event, when in fact the situation had not met the full set of the prespecified trigger conditions. The staff's puzzlement over why an automated fire was not happening had an adverse effect. Because the CSE was not performing as expected by the effects manager, his confidence in the capability of the tool diminished. Unable to understand why the AGM refused to fire, the effects manager tended to apply simple and very specific rules so that only the most critical targets were automatically engaged.

The automated BDA tool suffered from this lack of understanding, which led to a lack of trust, much more so than with the AGM. One would think that the seemingly less critical and nonlethal nature of BDA would lead to more ready acceptance by the operators. After all, the automated BDA tool was developed at the request of commanders in an early experiment, where they routinely tasked a UAV to take a picture of engaged Red assets. The commanders felt that if this task was automated, not only would it lighten the load of the staff, but it would also ensure that the task was conducted in a timely fashion. This seemed like an obvious task for which to develop effective rules, and the CSE developers set to work automating these seemingly obvious BDA tasks.

The solution worked exactly as expected by the tool designers and by the command staff who originally requested the automation. Unfortunately, the new command-cell members participating in the next experiment had rather different expectations. Early in the experiment, they used the automated BDA tool and became utterly confused. The information manager controlling the UAVs would wonder aloud, "Who is moving my UAV?" and "Where is that thing going now?"

What was originally designed to lighten the load of the command cell quickly turned into a perceived loss of control over critical assets. The automated BDA tool became available in Experiment 4b, and in each subsequent experiment, commanders and their staffs began by using the functionality but then quickly abandoned it because of the perceived loss of control.

So, what decisions can and should be automated? Why was the automated fires capability well received while the automated BDA was not? Based on our experience, we believe the difference comes down to the following considerations.

First, the commanders and staff must trust the system. Not only must the system be reliable enough to work as expected every time, but it must also be simple enough for the operators to understand when it will act and when it won't. In particular, there must be a very clear and easily understandable distinction between the computer control and human control.

For example, in the case of the automated fires, it was very clear whether the human or the computer was to make the final decision, and once a munition was launched, there was no opportunity for—or confusion about—the control. However, in the case of the BDA management, there was continuous uncertainty about who was in control of a given platform—a human or a computer—and the information manager had no means to collaborate with the system to answer his questions about control.

Second, it should be easy for the operator to enter rules that govern an automated decision-making tool. For example, it may initially seem obvious to the developers of an automated tool to call for fires on detected enemy tanks as soon as possible. However, when low on ammunition, a commander might want to fire only at those tanks that are able to affect his axis of advance. Likewise, he may not want to automatically engage tanks near populated areas or when a civilian vehicle is spotted nearby. The more rules and tweaks, the harder it is to understand the decisions made by the tool, and the sooner an operator will build distrust when the tool does not perform as he expects.
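The sketch below shows, in Python, how quickly such caveats accumulate; every added condition is another reason the tool may silently decline to fire. All of the conditions here are invented for illustration.

    def engage_tank(track, ammo_low, near_population,
                    civilian_nearby, affects_axis_of_advance):
        """Illustrative engagement rule after several operator tweaks."""
        if track["type"] != "tank":
            return False
        if near_population or civilian_nearby:      # restraint rules
            return False
        if ammo_low and not affects_axis_of_advance:
            return False                            # conserve munitions
        return True

    # An operator who sees a tank go unengaged must now reason about four
    # interacting conditions to understand why the tool "refused" to fire.
    print(engage_tank({"type": "tank"}, ammo_low=True,
                      near_population=False, civilian_nearby=False,
                      affects_axis_of_advance=False))  # -> False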

Naturally, other nontechnological factors also affect the extent to which automated decisions will be available to a future force. Perhaps our commanders accepted the automated fires so easily because the experiments were merely a simulation: the consequence of a wrong automated decision was the destruction of computer bytes and not of real people. In today's practice, a human is personally accountable for every fire decision, and great care is taken to avoid accidents. With any automation of decisions related to either lethal fires or to any other battle actions come many challenging questions about responsibility and accountability.

THE FOREST AND THE TREES

Decision making can suffer from an excessive volume of detailed information offered by the network-enabled command system. In our experiments, we observed several mechanisms by which the richness of information negatively impacted the decision making.

First, recall that all operators' displays were tied to the same underlying data source. Therefore, soon after an enemy asset was detected, every screen of every command-cell member in every command vehicle would show this new information. At first glance, this seems to be exactly the right behavior of the system, and the operators indeed desired to see all such information. And yet, this faithful delivery of detailed information proved to be a major distraction to the cell members' decision making, especially to commanders. Instead of focusing on understanding the enemy course of action and how to best counter likely enemy actions, commanders became mesmerized with the screen, hunting for changes in the display and reacting to them.


This so-called looking-for-trees behavior had at least two very adverse impacts on the commander's ability to understand the battlespace. On one hand, the commander gravitated to a reactive mode: he responded to changes on his display and frequently lost the initiative in the battle. This was especially true when inadequate sensor management led to detections of enemy assets outside of the truly critical areas of the battlespace. In such cases, the commander's fixation on the screen led him to focus on largely irrelevant topics while losing the grasp of main events in the unfolding battle.

On the other hand, responding to frequent updates on the screen prevented the commander from spending the necessary time thinking about the bigger picture of the situation. For example, in Experiment 4b, we noticed the excessive frequency with which the commander shifted his attention. He was almost constantly scanning the display for new information, moving his cursor from one entity to another to determine if new information was available, and reacting to the appearance of an enemy icon or alert box on the screen. In Run 4, he shifted his attention 26 times over a 13-minute period—an average of once every 30 seconds. During a 16-minute period in Run 6, he shifted his attention 60 times, for an average dwell time of about 16 seconds.

The implications of this frequent attention shifting are interesting and disturbing. The more often a decision maker shifts attention, the shorter the dwell time on a data element, and the more shallow the cognitive processing. The decision maker may determine, for example, that an enemy vehicle has been detected and may decide how to react to it. Then he shifts his attention to another change in his screen, without having enough time to reason about the broader issues—the implications of the detection of that type of vehicle at that place in the battlespace.

Furthermore, the commander would often "drag" the other cell members along with him as he shifted attention—announcing the updates he was noticing or issuing reactive tasks such as "DRAEGA just popped up, let's get a round down there." Such unnecessary and counterproductive communications about the newly arriving information were depressingly common. For example, in Experiment 6, as the commander watched on his screen the reports of Red artillery rounds landing around one of his platoon leader's vehicles, he felt compelled to keep announcing this fact to the beleaguered platoon leader. Of course, the platoon leader was well aware that he was under fire, and the commander's communications only served to distract him.