
A Real Environment, User Oriented, Evaluation of the Different Menus in Cetris

EVELINA HEDSKOG

Master of Science Thesis
Stockholm, Sweden 2009


A Real Environment, User Oriented, Evaluation of the Different Menus in Cetris

EVELINA HEDSKOG

Master's Thesis in Human Computer Interaction (30 ECTS credits)
at the School of Media Technology
Royal Institute of Technology, year 2009
Supervisor at CSC was Åke Walldius
Examiner was Ann Lantz

TRITA-CSC-E 2009:056
ISRN-KTH/CSC/E--09/056--SE
ISSN-1653-5715

Royal Institute of Technology
School of Computer Science and Communication
KTH CSC
SE-100 44 Stockholm, Sweden
URL: www.csc.kth.se

A real environment, user oriented, evaluation of the different menus in Cetris

Abstract

Cetris is a combat management system for navy vessels designed by Saab Systems. This thesis describes the process of, and analyses the results from, usability tests of the Cetris system. The purpose of these tests is to gain knowledge about some of the most important usability aspects of the different menus in the system. When designing the menus, the developers followed guidelines intended to make the interface as usable as possible. However, these guidelines have never been tested in the system's real setting. This study therefore includes usability tests onboard the Visby class in order to collect usability data within the system's real context.

More specifically, the usability tests conducted are different forms of comparison tests, which evaluate various aspects of the menus such as frequency of use and speed. Another issue addressed is whether redundant information in the menus constitutes a problem.

Ultimately, recommendations are given at three levels: recommendations regarding interface design, system usage, and system development. Concerning interface design, this thesis gives a number of hands-on proposals to enhance the usability of the system's different menus. Regarding system usage, the recommendation is that operators be given extensive formal training. Finally, the proposal on system development includes a plan for how Saab Systems can implement an ISO-certified user centered design process in order to obtain more usable products as well as a competitive advantage.

User-oriented evaluation of the Cetris menu functions in a live environment

Summary

Cetris is a combat management system for naval forces developed by Saab Systems. This report describes the execution of, and the results from, user tests of the Cetris system. The purpose of these tests is to gain knowledge about some of the most important usability aspects of the menus in Cetris. When developing the system's various menus, the developers followed a set of guidelines intended to make the interface as usable as possible. These guidelines, however, have never been tested in the system's real environment. This study therefore includes tests onboard the Visby class, with the aim of collecting usability data in the system's real context.

More specifically, the usability tests take the form of comparative tests, which evaluate different aspects of the menus such as frequency of use and speed. A further question addressed is whether redundancy in the system's menus constitutes a problem.

Finally, recommendations for improvements are given at three levels: recommendations concerning interface design, system usage, and system development. Regarding interface design, this report gives a number of concrete proposals for how the system's menus can be improved from a usability standpoint. Regarding system usage, the recommendation is that operators should receive more formal training. Lastly, a plan is proposed for how Saab Systems can implement an ISO-certified user-centered design process, both to create more usable products and to gain a competitive advantage.

Foreword

When starting this master's project I had many doubts about whether it was realistic to plan for onboard usability tests. However, a lot of people helped out and made it possible for me to conduct the tests I had planned onboard the Visby class. Firstly, I would like to thank Göran Hidemo and Michael Johansson from the Swedish Defense Materiel Administration for arranging my time onboard. I would also like to thank all the operators who participated in the tests, and the Helsingborg crew in particular. A special thanks to Operations Officer Ulf Welin for arranging all the practical details.

Besides everyone who helped out during the test phase of the study, I also want to show my appreciation to Mikael Broms and Åke Walldius, who have been my supervisors at Saab Systems and The Royal Institute of Technology respectively. Thanks for your support throughout the process.

Finally, a big thanks to my family and friends, who always support me in whatever project I throw myself into!

Contents

Background
    The Cetris system
    The Visby corvette
    The Combat Intelligence Central
    The Console and the different menus in Cetris
        MTD
        TID
        Hardware buttons
        Drop down menus
        Forms
Problem
Purpose
Theory
    Usability
    Concepts in the Usability Hierarchy
    Usability Engineering
        Domain Analysis
        Evaluation
    Usability Testing
        Comparison tests
    Test Methods
        Observations
        Contextual Inquiry & Cognitive Walkthrough
        Performance Measurement
    Theory Summary
Method
    Observations
    Contextual Cognitive Walkthrough
    Performance Measurement
    Choice of Test Methods
    Test plan
        Pilot Tests
        The Onboard Tests
Results & Conclusions
    Results & Conclusions in short
    Summary Test Results
        Observations
        Contextual Cognitive Walkthrough
        Performance Measurement
        Written Tests
    The level of Usability in Cetris' menus
        TID
        MTD
        Dropdown
        Hardware buttons
    Contextual factors of the Usability
    Frequency of Use
    Efficiency
    Characteristic Usage of Menus
    Quick-TID Usage
    Usage of Hardware Button
    The role of Redundant Information
    Potential problems with Navigation in Menus
    Insights from the Test Process
        Lessons for the future
Recommendations
    Recommendations in Short
    Recommendations regarding System Design
        TID
        MTD
        Drop Down Menu
        Forms
        Quick-TID
        Hardware Buttons
    Recommendations regarding other aspects of the System Usage
    Recommendations regarding how to improve the User Centered Design process at Saab Systems
Future use of Conclusions and Recommendations
Reflections
References
Appendix A – Test Plan
Appendix B – Test Results

Figures

Figure 1 The Visby Corvette
Figure 2 The Combat Intelligence Central
Figure 3 The Main Functional Console
Figure 4 The MTD-view
Figure 5 The MTD Menu
Figure 6 The TID
Figure 7 The Quick-TID into which desired command buttons can be copied
Figure 8 Hardware buttons
Figure 9 Picture of a dropdown menu in the situation picture
Figure 10 A Cetris Standard form
Figure 11 Interpretation of concepts in the Usability Hierarchy
Figure 12 Diagram showing the general elements of Usability Engineering
Figure 13 Diagram explaining the workflow of this master's project
Figure 14 Frequency of use
Figure 15 Average time to execute a command
Figure 16 Two different sub menus
Figure 17 Dropdown menu in three levels
Figure 18 The Human Centered Design Process according to ISO

Abbreviations / Terminology

CCW Contextual Cognitive Walkthrough

CIC Combat Intelligence Central

CMS Combat Management System

DD Drop Down

DDS Data Distribution System

GOC General Operators Console

HCI Human-Computer Interaction

HF Human Factors Engineering

HSwMS His Swedish Majesty’s Ship

HW Hardware

ISO International Organization for Standardization

MFC Main Function Console

MMI Man Machine Interface

MTD Main Tactical Display

PM Performance Measurement

TID Touch Input Display

UCD User Centered Design

Note

In this report, "master's thesis" and "report" are used interchangeably, as are "master's project" and "study".


Background

This chapter gives a brief introduction to the system evaluated in this study, together with a description of the context in which the system is operated.

The Cetris system

The Cetris system is a Saab-built naval C3 solution, i.e. a platform for Command, Control and Communication. A system of this kind is also called a Combat Management System (CMS). Command and control is about decision-making: the exercise of direction by a designated commander over assigned and attached forces in the accomplishment of a mission, supported by information technology.1 One important capability that C3 systems provide commanders is situational awareness: information about the location and status of enemy and friendly forces.

Today, the Cetris system is installed onboard the Swedish corvettes of the Stockholm and Visby classes and is integrated with different subsystems such as radars, guns and sonars. In this master's project, tests onboard the Visby class have been conducted. The system provides the ship and the command team with operational capabilities in support of all mission types, in the open ocean as well as in littoral regions. Furthermore, the system is designed to meet asymmetric threats as well as modern and forecast future threats.

The Visby corvette

Visby is the latest class of corvette to be adopted by the Swedish Navy, after the Göteborg and Stockholm class corvettes. The ships' design heavily emphasizes "low visibility", or stealth, technology. The ships are designed by the Swedish Defense Materiel Administration and built by Kockums. The class has received widespread international attention because of its status as a stealth ship and its network-centric capabilities. The hull is constructed with a sandwich design consisting of a PVC core with a carbon fibre and vinyl laminate, and its angular design reduces its radar signature.2 The armament of the ship consists of one 57 mm gun, surface-to-surface missiles, anti-submarine torpedoes and anti-submarine grenades.

Figure 1 The Visby corvette

1 http://www.c4i.org (081006)
2 http://en.wikipedia.org (081006)


The Visby corvette has a number of sensors that supply the Cetris system with information and enable the system to create an accurate situation picture. These are:

• Surveillance Radars
• Navigation Radar
• Electronic Support Measures
• Sonars
• Remote Operated Vehicles

Information from these sensors, together with the corresponding sensor information received over link from other units, creates a common situation picture in the Cetris system for all cooperating units.

The Combat Intelligence Central

The operation of Cetris is conducted in the Combat Intelligence Central (CIC). There are 12 identical operator consoles in the CIC, which is centrally located in the ship. The people operating the consoles have different roles and different tasks to perform. Hence, once logged in as a specific role, an operator has access to the tools needed for that particular area of responsibility. How many of the consoles are manned depends on the specific mission and current activities; usually, however, all consoles are manned. Normally, working shifts in the CIC are 4-6 hours long, but the workload can vary considerably over time, and sometimes the shifts are longer. Physically, the work can be challenging due to rough seas during periods of little or no sleep. Mentally, the work can be demanding as well, mainly because important and difficult decisions must be made under stressful circumstances. There is no daylight in a CIC, and normally the CICs onboard navy vessels are only partially lit. With the Cetris system, however, it is possible to work in normal indoor lighting.3

Figure 2 The Combat Intelligence Central

3 Broms (1999), p. 32


The Console and the different menus in Cetris

The Multi Function Console (MFC) is used by the operator to control the Cetris system. The MFC is equipped with a computer and a graphics adapter that presents the situation picture at an update rate of 30 Hz. Radar pictures can be presented overlaid on sea charts, with overlays of plots, tracks, text, graphics and windows. To handle the system, the operator has a wide choice of menus. The following (non-official) grouping of menus will be used in this study:

• Main Tactical Display-menu (MTD-menu)
• Touch Input Display (TID)
• Quick-TID (user defined menu)
• Dropdown menus
• Hardware buttons
• Forms

Figure 3 The Main Functional Console (with callouts marking the Hardware buttons, Quick-TID, Forms, MTD-menu, Dropdown menus and TID)


MTD

The MFC has a number of features. Right in front of the operator, the Main Tactical Display (MTD) is located. The MTD is the main display, which contains three different fields. Firstly, the MTD is where the operator can get an overview of the situation picture. A sea chart is displayed to the left in the MTD, showing information about different targets. Secondly, in the middle of the MTD, a bar menu is displayed. It is this menu that will be referred to as the MTD-menu in this report. Thirdly, to the right in the MTD, different forms can be displayed.

Figure 4 The MTD-view

Figure 5 The MTD Menu


TID

The Touch Input Display (TID) is placed right below the MTD. The TID is provided with a touch sensitive membrane, providing soft key capability. The membrane is of the resistive kind, i.e. it consists of thin layers that need to be pressed together to be activated, which makes it possible for the operator to wear gloves while working at the console (something operators always wear onboard in a combat situation). The TID is the only menu that handles all possible commands in the Cetris system, and is designed to function as the primary menu. The TID consists of 12 menus, each with a number of sub menus. The sub menus are further divided into different functions. Function buttons can either execute a command or display a form in the MTD's right field.

Figure 6 The TID
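The TID's three-level structure (menu, sub menu, function button) can be sketched as a small tree. The menu and function names below are invented for illustration; the actual TID layout is not reproduced in this report. The helper returns the sequence of button presses needed to reach a given function, which is one simple way to reason about the efficiency questions this thesis raises later.

```python
# A hedged sketch of the TID's three-level hierarchy
# (top-level menu -> sub menu -> function button).
# All names here are hypothetical; the real TID has
# 12 top-level menus.
TID_MENUS = {
    "Sensors": {
        "Radar": ["Select Range", "Adjust Gain"],
        "Sonar": ["Activate", "Set Mode"],
    },
    "Targets": {
        "Manual": ["Create Target", "Delete Target"],
    },
}

def path_to(menus, function):
    """Return the sequence of button presses (menu, sub menu,
    function button) needed to reach `function`, or None if the
    function does not exist in the hierarchy."""
    for menu, submenus in menus.items():
        for submenu, functions in submenus.items():
            if function in functions:
                return (menu, submenu, function)
    return None

path = path_to(TID_MENUS, "Create Target")
print(path)       # ('Targets', 'Manual', 'Create Target')
print(len(path))  # 3 button presses
```

Under this sketch, every function costs three presses; a Quick-TID copy of a button (described below) would reduce that to one, which is the efficiency argument for user-defined shortcut menus.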

Quick-TID

Keys controlling functions that should be immediately accessible are collected in a Quick-TID, an area placed to the right of the main TID. For gun control officers, the Quick-TID contains a predefined set of function keys. For other roles, the operator may copy keys from the TID to the Quick-TID, tailoring it to his or her current task.

Figure 7 The Quick-TID into which desired command buttons can be copied


Hardware buttons

Between the TID and the operator there is a keyboard. This keyboard is mainly used when handling different forms.

To the left and right of the keyboard, the operator has two trackballs. These are used in the same way as a mouse on a normal PC, but they also have functionality designed to work together with, for example, the weapon control director.

Finally, there are a number of hardware buttons to the right of the TID. These hardware buttons are for frequently used functionality such as "Apply", different safety switches for gun control, and buttons used when powering up the MFC.

Figure 8 Hardware buttons

Drop down menus

The operator can bring up a drop down menu by right-clicking in the situation picture to the left in the MTD. If no object in the situation picture is currently selected, one set of functions is displayed in the drop down menu. If one or more objects are selected, another set of functions is accessible through the drop down menu.

Figure 9 Picture of a dropdown menu in the situation picture


Forms

Forms are used for data presentation and entry. They primarily handle textual data, but may also provide pictorial data representation. The two most common forms in Cetris are Standard forms and Parameter forms; the difference is that the latter are more temporary in nature and are often used to let the operator enter a single value.

Standard form windows have a standard Windows title bar. When opened, they appear at the upper-right corner of the main display, and they can be moved and closed by the operator. Parameter forms also have a standard Windows title bar and can be moved and closed by the operator; when opened, they appear at the lower-right part of the main display.

Figure 10 A Cetris Standard form


Problem

This chapter gives a detailed description of the problem examined in this master's project.

Saab Systems' MMI Principles are a set of guidelines that aim to produce the best possible interface for this type of complex system. They do this by providing a means of interaction that is both intuitive and consistent, and by striving for simplicity and familiarity in order to reduce the time taken to learn and use the system.4

The MMI principles are needed to document and promote good user interface design. They encourage both visual and functional consistency, and they provide a documented set of guidelines that can be presented to the users, giving them visibility of the end product long before it is ready for delivery. This helps familiarize the users with the intended operation of the system and gets them involved in the MMI design at an early stage.5

When developing the Cetris system, the engineers followed the MMI guidelines described above in order to make the interface as usable as possible. However, these guidelines have not been tested in the system's real setting. Thus, this master's project aims to conduct usability tests onboard Navy ships in order to collect contextual usability data. Nevertheless, the MMI guidelines will not be used as a platform for testing; instead, a number of other ways to verify the usability of the menus in Cetris will be used. These methods are described in the theory and method chapters.

Cetris operators have a wide range of alternatives for how to execute an action in the system. Almost all commands can be performed in two ways, and a number of functions can be executed in three ways. This master's project examines how these choices affect the effectiveness of the user. Is it likely that operators handle the system in suboptimal ways? If so, is it possible to eliminate the ways in which the system loses effectiveness, and can conclusions on the subject be applied to other combat management systems?

The redundancy in the ways in which the system can be operated is the main focus of this thesis. However, there are a number of questions related to the general redundancy problem. Since Cetris is a big and complex system, it is not possible (in the time a master's project allows) to analyze all the functionality the system offers. Thus, a selection must be made. When doing so, it is preferable to identify tasks that are commonly performed, e.g. radar range selection, creating a manual target or engaging in air defense. An evaluation of how operators perform these tasks can then be conducted onboard. There are two criteria for the functions evaluated in the usability study: they should be commonly used, and it should be possible to perform them in multiple ways in the Cetris system today.

Apart from the redundancy problem, a number of other issues are addressed in the study. The first is the contextual aspects of the work performed in the CMS. Are there communication channels outside the Cetris system, and if so, in what way do they show? Another question concerns how easy it is to find the right button in a menu. Are operators less efficient than they could be due to problems finding the button that will execute their intended command?

4 Saab Systems MMI Principles, p. 7
5 Saab Systems MMI Principles, p. 7


In short, the problems addressed in this study are:

• How high is the level of usability in Cetris' different menus?

• What role does redundant information play for the usability of the system?

• Are there any contextual factors that influence the usability of the system?

• Do operators have problems finding the right button when navigating in the menus?

And furthermore, the following issues are discussed:

• What improvements can be made?

• What other general conclusions can be drawn from the study?


Purpose

This chapter describes the purpose of this master's project and how it can contribute to future system interface design at Saab Systems.

The main purpose of this master's project is to gain knowledge about some of the most important usability aspects of menus in the Cetris system, for beginners as well as experienced users. Furthermore, the aim is to draw more general conclusions regarding the design of menus in CMSs, which can later be applied to other Saab products. This thesis will also propose improvements to the menus, examine whether redundant information is a problem, and examine how relevant non-represented information is affected in the specific context.

As a whole, this project and thesis will contribute with insights concerning usability testing of CMSs onboard, in contrast to the earlier lab-based tests that have been conducted. This will increase the validity of the test results, and provide a platform for future usability tests onboard. Furthermore, Saab Systems is today developing the technology for the fourth generation of combat management systems, also known as Mk4. The aim of this thesis is to draw important lessons regarding the design of menus, and to facilitate the implementation of these insights into the Mk4 generation of combat management systems.


Theory

This chapter aims to sort out the complex terminology of the usability field, and at the same time highlight the methods that are used in this study. However, this is just one interpretation of the relations between different usability disciplines, and it is not to be seen as the “truth”.

Usability

Before we can discuss the terminology of the usability field, it is important to know what exactly usability is. The standard ISO 9241-11 (1998) Guidance on Usability defines usability as:

The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.

ISO 9241-11 is a central standard in usability, explaining how to identify the information that is necessary to take into account when specifying or evaluating usability in terms of measures of user performance and satisfaction. Guidance is given on how to describe the context of use of the product (hardware, software or service) and the required measures of usability. More importantly, it also includes an explanation of how the usability of a product can be specified and evaluated as part of a quality system6. More information regarding usability ISO standards is presented in the recommendations chapter.
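The three ISO measures can be operationalized directly on usability-test data. As a minimal sketch (all numbers and field names below are invented for illustration, not results or instruments from this study), effectiveness can be read as task completion rate, efficiency as mean completion time, and satisfaction as a mean questionnaire rating:

```python
# Sketch: operationalizing the ISO 9241-11 measures on usability-test data.
# The session records below are invented for illustration only.

def summarize(sessions):
    """sessions: list of dicts with 'completed' (bool), 'seconds' (float)
    and 'satisfaction' (a 1-9 rating, the scale used in this study)."""
    n = len(sessions)
    effectiveness = sum(s["completed"] for s in sessions) / n       # task success rate
    finished = [s["seconds"] for s in sessions if s["completed"]]
    efficiency = sum(finished) / len(finished)                      # mean completion time
    satisfaction = sum(s["satisfaction"] for s in sessions) / n     # mean rating
    return {"effectiveness": effectiveness,
            "efficiency_s": efficiency,
            "satisfaction": satisfaction}

sessions = [
    {"completed": True,  "seconds": 12.0, "satisfaction": 7},
    {"completed": True,  "seconds": 18.0, "satisfaction": 6},
    {"completed": False, "seconds": 60.0, "satisfaction": 4},
]
print(summarize(sessions))
```

A summary of this kind makes the three measures comparable across menus, which is the spirit of the benchmarking performed later in this study.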

Concepts in the Usability Hierarchy

In this thesis, a number of key terms such as Usability Engineering and Usability Testing will be used. In the following, an explanation of how these and other usability concepts are interrelated will be given.

At the top of the Usability Hierarchy we find Human Factors Engineering (HF). HF includes human interaction with any tool, not just computers. Human-Computer Interaction (HCI), which is the academic discipline under which this thesis is sorted, is a sub-field of HF and deals with people and computers. One additional level down in the methodological hierarchy, we find User Centered Design (UCD).

UCD is a joint term that denotes a method or tool to achieve user centered products. Hence, there are several variants of UCD, each with a different originator and, consequently, a different approach. The different UCD disciplines and their original creators are7:

• Contextual Design – Karen Holtzblatt’s version of UCD
• Experience Design – Paula Thornton’s version of UCD
• Interaction Design – Alan Cooper’s version of UCD
• Usability Engineering – Jakob Nielsen’s version of UCD

6 www.usabilitypartners.se (080715)

7 www.stc.org (081030)


This thesis will focus on Usability Engineering, and hence describes that UCD discipline more closely.

Figure 11 Interpretation of concepts in the Usability Hierarchy

Usability Engineering

Usability Engineering produces highly usable user interfaces, which are essential to reduce manning, reduce human error and increase productivity. Unfortunately, developers often have the misconception that usability engineering activities add cost to a product’s development life cycle. In fact, usability engineering typically reduces costs over the life of the product, by reducing the need to add missed functionality later in the development cycle, when such additions are more expensive8. The usability engineering process is flexible enough to be applied at any stage of the development life cycle, although early use of the process provides the best opportunity for cost savings. In the following, the usability engineering activities will be briefly described.

Usability engineering consists of numerous activities and includes design as well as evaluation with users; it is not only applicable to the evaluation phase. Hence, usability engineering is not typically hypothesis-testing-based experimentation, but instead is structured, iterative user-centered design and evaluation applied during all phases of the interactive system development lifecycle9.

The Usability Engineering approach to UCD can be divided into three sub-categories, all of which contain different activities. These categories are:

• Analysis & Design
• Development
• Evaluation

Since Usability Engineering is very much about iteration, these categories are all interrelated.

8 Gabbard et al. (2003) s.1
9 Gabbard et al. (2003) s.1


Figure 12 Diagram showing the general elements of Usability Engineering

As mentioned earlier, the different elements of Usability Engineering consist of a number of activities. In the following, the theory behind the activities involved in this study will be described.

Domain Analysis

Domain analysis is part of the Analysis & Design category and is the process by which answers to two critical questions about a specific application context are determined10:

• Who are the users?
• What tasks will they perform?

Interviewing an existing and/or identified user base, along with subject matter experts and application “visionaries” can provide very useful insight into what users need and expect from an application.

Domain analysis generates critical information used throughout all stages of the usability engineering lifecycle, but is in this study mainly a foundation for the usability testing. A key result is a top-down, typically hierarchical decomposition of detailed user task descriptions, which serves as an enumeration and explanation of desired functionality for use by the designers and evaluators, as well as of required task sequences. Other key results are one or more detailed scenarios describing potential uses of the application, and a list of user-centered requirements11. For this project, all of these can be found in the test plan, appendix A.

Evaluation

As will be explained later, Usability Testing is a sub-category of usability evaluation, and since this study is based on Usability Testing, the evaluation part of Usability Engineering is definitely the most applicable in this thesis. There are different approaches to usability evaluation, and the form of evaluation used is typically associated with a particular stage of the product life cycle. In this project, a mix of practices called formative evaluation and summative evaluation is used.

10 Gabbard et al. (2003) s.2
11 Gabbard et al. (2003) s.2


Formative evaluation is the process of assessing, refining and improving a user interface design by having representative users perform task-based scenarios, observing their performance and collecting data to empirically identify usability problems. A typical cycle of a formative evaluation begins with the creation of scenarios based on the user task analysis. Representative users perform these tasks as evaluators collect both qualitative and quantitative data. The two kinds of data are equally important, since they each provide unique insight into a user interface design’s strengths and weaknesses12.

Summative evaluation, in contrast to formative evaluation, is a process that is typically performed after a product or some part of its design is more or less complete. Its purpose is to statistically compare several different systems or candidate designs, for example, to determine which one is “better”, where better is defined in advance. In practice, summative evaluation can take many forms, such as the comparative field trial or expert reviews. However, this is the most costly type of evaluation, because it requires large numbers of users to achieve statistical validity and reliability, and because the data analysis can be complex and challenging13.

In this master’s project, the product that will be tested is already in use. Thus, the evaluation will have a mixed formative and summative approach.

Usability Testing

Usability testing refers to a process that employs participants who are representative of the target population to evaluate the degree to which a product meets specific usability criteria. This requirement of representative users excludes techniques such as expert evaluations and other techniques that do not involve users as part of the process14. Hence, it is important to stress the difference between the terms usability evaluation and usability testing.

Usability testing is a research tool, with its roots in classical experimental methodology. The range of tests one can conduct is considerable, from true classical experiments with large sample sizes and complex test designs, to very informal qualitative studies with only a single participant. Each testing approach has different objectives, as well as different time and resource requirements. The testing approaches that are used in this study are described more closely in the method chapter.

The overall goal of usability testing is to identify and rectify usability deficiencies existing in computer-based and electronic equipment. The intent is to ensure the creation of products that are easy to learn and use, are satisfying to use and provide the utility and functionality that are highly valued by the target population15.

However, conducting usability tests does not give a 100% guarantee of usability and success, and it is important to understand their limitations. The reasons for this are numerous, e.g. the fact that testing always creates a somewhat artificial situation, or that participants are rarely fully representative of

12 Gabbard et al. (2003) s.3
13 Gabbard et al. (2003) s.3
14 Rubin (1994) s.25
15 Rubin (1994) s.26


the target population. But in spite of limitations like these, usability testing, when conducted with care and precision, is an almost infallible indicator of potential problems and of the means to resolve them.

The basic methodology for conducting a test originates from the classical approach to conducting a controlled experiment. With this formal approach, often employed in basic research, a specific hypothesis is formulated and then tested by isolating and manipulating variables under controlled conditions. However, it is often impossible or inappropriate to use such a methodology for usability tests, due to today’s fast-changing development environments. In addition, the amount of prerequisite knowledge of experimental method and statistics required to perform these kinds of studies properly is considerable16. In contrast to controlled experiments such as those described above stands a more informal, iterative approach to testing, albeit with experimental rigor at its core. This project will apply a series of quick, pointed studies of that kind, instead of one single, and less flexible, controlled experiment.

Comparison tests

The usability tests conducted in this study can be classified as comparison tests, which are not associated with any specific point in the product development life cycle. A comparison test is used to compare two or more alternative designs, such as two different interface styles, or the current design of a manual with a proposed new design, or to compare a product with a competitor’s. It is typically used to establish which design is easier to use or learn, or to better understand the advantages and disadvantages of different designs. Furthermore, it can be conducted informally as an exploratory test, or as a tightly controlled classical experiment, with one group of participants serving as a control group and the other as the experimental group. The form used depends on the goals of the test. For exploratory comparison tests, experience has shown that the best results and the most creative solutions are obtained by including wildly differing alternatives, rather than very similar ones17.

To summarize the discussion about usability evaluation and testing, one could say that in terms of evaluation, a mixed formative and summative approach is taken in this study. The usability tests performed are primarily of a comparative nature. The different types of tests are presented more closely below.

Test Methods

In the following paragraphs, the theory behind the different test methods used in this study will be presented. However, this only serves as background before reading the method chapter, in which they are described in more detail.

Observations

There is a variety of structured, less structured and descriptive observation techniques for evaluators to choose from. In deciding which technique to use, it is important to always identify a goal that provides focus for the observation. Having a goal, even a very general one, helps to guide the observation, because there is always so much going on. Observing is useful at any time during product development. Early in design, observations help designers understand users’ needs. Other types of observation are

16 Rubin (1994) s.26
17 Rubin (1994) s.41


done later to examine whether the developing prototype meets users’ needs. Depending on the type of study, evaluators may be onlookers, participant observers or ethnographers18.

The same basic tools are used for laboratory studies as well as field studies, but the way in which they are used differs. In the laboratory, the emphasis is on the details of what individuals do, while in the field the context is important and the focus is on how people interact with each other, the technology and their environment. Furthermore, the equipment in the laboratory is usually set up in advance and is relatively static, whereas in the field it usually must be moved around19.

Data collection techniques (i.e. taking notes, audio and video recording) are used individually or in combination, and are often supplemented with photos from a still camera. When different kinds of data are collected, evaluators have to coordinate them. This requires additional effort, but has the advantage of providing more information and different perspectives. Interaction logging and participant diary studies are also used. Which techniques are used depends on the context, the time available and the sensitivity of what is being observed. In most settings audio, photos and notes will be sufficient. In others it is essential to collect video data in order to observe in detail what is going on20.

Data from think-aloud protocols and video or audio transcripts can be analyzed in different ways. These can be coarse-grained analyses, or detailed analyses of excerpts from a protocol in which each word, phrase, utterance or gesture is analyzed. Sometimes examining a comment or action in the context of other behavior is sufficient21.

In this study, observations will occur in different forms. Initially, a smaller field study will be carried out, in which the contextual aspects of working in the combat intelligence central onboard Visby corvettes are examined. After that, other forms of observation will take place. These are presented in the following paragraphs.

Contextual Inquiry & Cognitive Walkthrough

Contextual Inquiry is an approach to ethnographic study used for design that follows an apprenticeship model: the designer works as an apprentice to the user. The most typical format for contextual inquiry is a contextual interview, which is a combination of observation, discussion and reconstruction of past events. Contextual inquiry rests on four main principles: context, partnership, interpretation and focus22.

The context principle emphasizes the importance of going to the workplace and seeing what happens. The partnership principle states that the developer and the user should collaborate in understanding the work; in a traditional interview or workshop situation, the interviewer or workshop leader is in control, but in contextual inquiry the spirit of partnership means that the understanding is developed through cooperation. The interpretation principle says that the observations must be interpreted in order to be used in design, and this interpretation should also be developed in cooperation between user and developer. The fourth principle, the focus principle, concerns how to know what to look for. In contextual inquiry it is important that the discussion remains pertinent to the design being

18 Preece et al. (2002)
19 Preece et al. (2002)
20 Preece et al. (2002)
21 Preece et al. (2002)
22 Preece et al. (2002)


developed. To this end, a project focus is established to guide the interviewer, which is then augmented by the interviewer’s own focus, arising from his or her perspective and background23.

Cognitive walkthrough is a review technique where task scenarios are constructed and test monitors then help experts “playing” users, or sometimes real users, to “walk through” the interface in order to find problematic usability features24. In this study, the variation of letting real users perform the walkthrough will be used. The tasks and scenarios are, as explained earlier, based on the domain analysis, and aim to pinpoint usability problems for the most frequently used features of the system.

However, instead of performing the Contextual Inquiry and the Cognitive Walkthrough separately, a combination of the two will be used in this study, from now on called a Contextual Cognitive Walkthrough (CCW). The CCW is discussed more closely in the Method chapter.

Performance Measurement

Most usability tests are about qualitative data. However, some are targeted at determining hard, quantitative data. Most of the time this data takes the form of performance metrics: how long does it take to select a block of text with a mouse, touchpad or trackball? How does the placement of the backspace key influence the error rate? Often these metrics are used as goals during the design of a product. Goals can be stated as stipulations, for example, "Users shall be able to connect to the Internet without errors or having to call the toll-free number," or "75% of users shall be able to complete the basic task in less than one hour." These benchmarks are devised during initial usability testing, either of a previous release or of a competitor product25.
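A benchmark stated as a stipulation of the kind quoted above can be checked mechanically once the timings are collected. As a minimal sketch (the timings below are invented for illustration), testing "75% of users shall complete the basic task in less than one hour" amounts to:

```python
# Sketch: checking a quantitative usability benchmark of the kind quoted above.
# The per-user timings are invented for illustration only.

def meets_benchmark(times_s, limit_s=3600.0, required_share=0.75):
    """True if at least `required_share` of users finished within `limit_s` seconds."""
    within = sum(1 for t in times_s if t < limit_s)
    return within / len(times_s) >= required_share

times = [1200, 2400, 3000, 4000]   # seconds per user; one user exceeds an hour
print(meets_benchmark(times))      # 3 of 4 users (75%) are within the limit
```

Stating the pass criterion in advance, as here, is what turns a raw timing log into a testable design goal.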

Performance measurement is commonly used in the initial stages of design to provide benchmarks for the design process. It is also used during the design cycle to measure the work done thus far against those benchmarks26. In this study, different menus will be benchmarked against each other.

As in other test methods, the test objectives have to be expressed in testable terms, but when measuring performance they also have to be quantifiable. For example, in this study questions like "What is more efficient, keyboard shortcuts or toolbar buttons?" will be asked. The performance of each user will be recorded, and a protocol will be kept of how long it took the users to execute different commands.

Since the goal of a performance measurement test is to gather valid quantifiable data, the experimental design must be valid as well. Quantitative tests assume that a change in the independent variable (for example, the presence of keyboard shortcuts or toolbar buttons) influences the dependent variable (the time it takes to execute commands using one of the two options). This influence is called the experimental effect. However, if other factors are introduced into the design, the effect may be confounded, that is, not statistically valid due to tainting by the other factors. The design must take into account possible confounding factors and eliminate possible sources of tainting27.
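The independent/dependent-variable comparison described above can be sketched as follows. All timings are invented for the example, and a real analysis would additionally counterbalance task order per participant to guard against the confounding just discussed:

```python
# Sketch: comparing the dependent variable (execution time) across two levels
# of the independent variable (keyboard shortcuts vs. toolbar buttons).
# The timings are invented for illustration only.
from statistics import mean, stdev

shortcuts = [2.1, 1.8, 2.4, 2.0, 1.9]   # seconds per trial
toolbar   = [3.0, 2.7, 3.4, 2.9, 3.1]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (no equal-variance assumption)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

print(f"means: {mean(shortcuts):.2f}s vs {mean(toolbar):.2f}s, "
      f"t = {welch_t(shortcuts, toolbar):.2f}")
```

A strongly negative t value here would indicate that the shortcut condition is faster; whether the difference is statistically significant would then be judged against the appropriate t distribution.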

23 Preece et al. (2002) s. 299
24 www.wikipedia.com (081023)
25 http://jthom.best.vwh.net (080703)
26 http://jthom.best.vwh.net (080703)
27 http://jthom.best.vwh.net (080703)


Theory Summary

To conclude the discussion on the theory behind Usability Engineering in general, and this study in particular, one could say that this project mainly concerns the evaluation segment of Usability Engineering. However, analysis is also a big part of the study, since it is needed in order to design the different usability tests. Furthermore, the usability tests performed are of a comparative nature, and consist of observations, a contextual cognitive walkthrough and performance measurement. Together, the test results will eventually render conclusions and recommendations. The diagram below explains the workflow:

Figure 13 Diagram explaining the workflow of this master’s project


Method

The three different tests that will be conducted in this study were generally described in the theory chapter. In this chapter they will be described in more detail, and applied to the specific context onboard a navy vessel. Additionally, a discussion of why these specific tests were chosen for the study will follow.

Observations

Observations will be the first step of three in a series of usability tests. The observations will take place in the combat intelligence central, and will be divided into two categories: general observation and guided observation. The general observations will not have any guidelines at all; in these, anything that seems important to know in terms of usability will be documented. In the guided observations, a framework will be used in order to more effectively keep the goals and questions of the study in sight. Preece et al. (2002) suggest the following framework, originally created by Colin Robson28:

• Space. What is the physical space like, and how is it laid out?
• Actors. What are the names and relevant details of the people involved?
• Activities. What are the actors doing and why?
• Objects. What physical objects are present, such as furniture?
• Acts. What are the specific individuals doing?
• Events. Is what you observe part of a special event?
• Goals. What are the actors trying to accomplish?
• Feelings. What is the mood of the group and the individuals?
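Field notes taken under this framework can be kept in a fixed record structure so that every observation covers all eight dimensions. The sketch below illustrates one possible shape; the example entry is invented for illustration, not an actual observation from this study:

```python
# Sketch: a note-taking record structured after Robson's observation framework.
# The example entry below is invented, not an observation from this study.
from dataclasses import dataclass, field

@dataclass
class ObservationNote:
    space: str
    actors: str
    activities: str
    objects: str
    acts: str
    events: str
    goals: str
    feelings: str
    tags: list = field(default_factory=list)  # e.g. "general" or "guided"

note = ObservationNote(
    space="combat intelligence central, low light, fixed consoles",
    actors="two operators, one test leader",
    activities="tracking exercise during a system test",
    objects="MFC consoles, headsets, paper checklists",
    acts="operator switches between the TID and the quick-TID",
    events="part of a sea acceptance test, not normal routine",
    goals="verify a specific system function",
    feelings="focused, routine mood",
    tags=["guided"],
)
print(note.space)
```

Keeping each note in the same shape makes the later compilation into a chart (as done in appendix B) a mechanical step rather than an interpretive one.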

The observations will be conducted during so-called SATs, Sea Acceptance Tests, where some specific features of the Cetris system undergo their final tests before being handed over from the Swedish Defense Materiel Administration to the Swedish Navy. This implies that during the observations, the work in the combat intelligence central will be focused on some specific tasks, in contrast to the “normal” work routine onboard a navy vessel. However, the specific features that are tested are quite complex and comprehensive. Hence, an observation carried out in parallel with those tests is likely to have high reliability.

Contextual Cognitive Walkthrough

The CCWs are probably the most critical part of the usability tests performed in this study. In these tests, different operators with different roles will be asked to perform different tasks on the MFC, using diverse menus. The tests will consist of different tasks and scenarios, which the test leader will present to the operator. When the operator carries out the assignment, he or she will be asked to explain his or her actions. The whole procedure will be recorded on video. In this test, it is very important to get the operator to show and explain his or her behavior in the given scenarios, but it is equally important to be attentive to issues that the operator raises spontaneously. The test is designed to let the user explain how he or she uses the system in its real environment, and it is expected that the mixture of scenarios and the possibility for operators to freely discuss the usability of the system will give important input to the study. The tasks and scenarios for the different operators can be found in appendix A.

28 Preece et al. (2002)


Performance Measurement

The performance measurement test has some similarities to the contextual cognitive walkthrough. All users will be given the same tasks to perform in specific menus. The assignments will be relatively general and illustrate common commands that all operators perform in the system. Since all participants in the test perform the exact same tasks, the data regarding time to complete a task will be a reliable source of quantitative data. The overall goal of this test is to examine whether any of the ways to execute commands is more efficient than the others. For example, in terms of efficiency, is it preferable to create new manual targets through the dropdown menu, or is it better to execute the command via the TID? This stands in contrast to the contextual cognitive walkthrough, which instead focuses on whether it is a problem that commands can be executed in many ways.

Choice of Test Methods

Observations, contextual cognitive walkthrough and performance measurement were selected as test methods for a number of reasons. Firstly, observations were planned in order to collect contextual data. The combat intelligence central onboard a navy vessel is very different from e.g. a normal office environment, and hence a testing method that captures these differences is central to the study.

Furthermore, the contextual cognitive walkthrough is essentially a classic cognitive walkthrough, although this test, like the observations, has the advantage of collecting contextual data, since it takes place in the operator’s actual environment.

Finally, since this master’s project has a comparative approach, the performance measurement test is a straightforward way to envision the differences between menus in terms of time efficiency.

Test plan

The test plan serves as a foundation, a blueprint, for the usability tests. It addresses the how, when, where, who, why and what of the tests, and describes everything in detail so that there are no loose ends.

Test plan formats vary according to the type of test and the degree of formality required in the organization. However, the following are the typical sections to include29:

• Purpose
• Problem statement/test objectives
• User profile
• Method
• Task list
• Test environment/equipment
• Test monitor role
• Evaluation measures
• Report contents and presentation

The specific test plan for this study can be found in appendix A.

29 Rubin (1994)


Pilot Tests

To test the design of the different usability tests, a number of pilot tests were conducted. The pilot tests took place in a laboratory environment at Saab Systems, and the “pilots” were former navy officers now working at Saab Systems. These sessions rendered a lot of important information on the test design. For example, the order of different tasks was changed in order to get a more natural “flow” in the scenarios.

Furthermore, the pilot tests were also an important tool in the design of the written tests, since they gave important input on how a user interpreted the questions, and on how the questions could be redesigned to avoid misunderstandings.

The Onboard Tests

The usability tests were conducted during two weeks in Karlskrona, onboard HSwMS Helsingborg, with a total of 13 operators. The setting for the tests was the combat intelligence central. The tests took approximately one hour per participant, and consisted of the pretest questionnaire, the contextual cognitive walkthrough, the performance measurement and, thereafter, the written posttest. Observations were carried out separately during so-called sea acceptance tests, when the CIC was fully manned.

The users involved in the real study were operators onboard HSwMS Helsingborg and HSwMS Nyköping, and personnel from the Sea Trials Unit Visby. The majority of the test participants were officers, but conscripts were also involved. Furthermore, the different operators all had various roles in the combat intelligence central, and represented a wide spectrum in terms of age, educational status, experience from other CMSs etc. A complete summary of the test participants’ backgrounds can be found in appendix B.


Results & Conclusions

This chapter gives an overview of the test results gathered in the different usability tests, together with conclusions regarding the different questions stated in earlier chapters. To start with, focus will be on high-level results and conclusions, which spring from the questions stated in the problem chapter. Later, low-level results and conclusions will be discussed, which originate from the questions proposed in the test specification. The results and conclusions are to be seen as a combined picture of all the material collected in this study: usability tests, written inquiries, interviews as well as observations.

Results & Conclusions in short

• Users give Cetris an average of seven on a scale from one to nine in terms of usability (where one is least usable and nine is most usable).

• Three main contextual factors affect the usability of the system: weather conditions, endurance and communication.

• The TID is the most frequently used menu.

• Hardware buttons are the fastest way to execute a command.

• Personalization of the quick-TID is considered a main advantage.

• The MTD-menu is mainly used for different status checks.

• The Dropdown menu is considered easy to use in its first level, but complicated in its submenus.

• Operators almost always carry out the most frequently used commands in the same way.

• The redundancy of the system is considered an advantage, and does not make it difficult for operators to find the right button when executing commands.

• The use of hardware buttons is moderate.

Summary of Test Results

In the following paragraphs, a short description of the raw data from the different usability tests will be given.

Observations

The observations were carried out according to the guidelines presented in the theory chapter, and resulted in a chart that can be studied more closely in appendix B. The aim of the observations was to identify contextual factors that influence the usage of Cetris. Three main factors were identified, which will be discussed more closely in the next chapter:

• Weather conditions
• Endurance
• Communication


Contextual Cognitive Walkthrough

In the CCW, users were given 14 different tasks. The common characteristic of these tasks was that they could be performed in two menus or more, and the main aim of the test was to explore which menu operators preferred to use, and whether this varied depending on the task at hand. The CCW was also a test in which the operators’ comments on different aspects of the usability could be captured. In short, one could say that the CCW focused on gathering as much usability data on the menu handling as possible, both quantitative and qualitative.

The chart below summarizes the test results in terms of which menu was used for the different tasks. (However, sometimes operators used multiple menus for just one task.) Comments and remarks about the qualitative data collected in the CCW are further discussed in the next chapter.

As the chart shows, the TID is the most commonly used menu, followed by forms, Quick-TID, MTD and DD. The least used menu is the hardware buttons; however, only a limited number of commands can be carried out through these. The complete table summarizing the CCW can be found in appendix B.

Figure 14 Frequency of use
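A frequency chart of this kind is produced by tallying, per task and operator, which menu(s) were used. As a minimal sketch (the records and task names below are invented for illustration, not data from this study), and noting that an operator may use more than one menu for a single task:

```python
# Sketch: tallying which menu each operator used per CCW task, as behind a
# frequency-of-use chart. The records below are invented for illustration only.
from collections import Counter

# (operator, task, menus used) -- an operator may use several menus per task
records = [
    ("op1", "create manual target", ["TID"]),
    ("op1", "radar range selection", ["Quick-TID"]),
    ("op2", "create manual target", ["TID", "DD"]),
    ("op2", "radar range selection", ["Hardware"]),
    ("op3", "create manual target", ["Forms"]),
]

usage = Counter(menu for _, _, menus in records for menu in menus)
for menu, count in usage.most_common():
    print(menu, count)
```

Counting every menu touched per task, rather than only the first, matches the parenthetical note above that operators sometimes used multiple menus for one task.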


Performance Measurement

The chart below illustrates the average time it takes for an operator to execute a command in the TID, MTD, Dropdown and Hardware menus respectively. It is clear that the hardware buttons are the fastest way to execute a command, followed by the TID, DD and MTD. The complete table summarizing the performance measurement can be found in appendix B.

Figure 15 Average time to execute a command

Written Tests

Two written tests were conducted in this study: the pretest questionnaire and the written posttest. These consisted of both quantifiable data and subjective comments. Since it is not possible to make the results of these written tests understandable by summarizing the data in a chart, the answers collected in these parts of the usability tests can be found in Appendix B. However, even though the data cannot be represented here, it is important to stress the value of the written tests, since they give great insight into the users' subjective opinions on the usability of the system in relation to more objective data such as age, educational status, etc.


The Level of Usability in Cetris' Menus

This thesis work has collected data regarding the usability of the menus in the Cetris system. The material shows that the overall usability of Cetris' different menus is good, but can of course be improved.

Regarding the subjective opinion of the end users, they give Cetris an average of seven in terms of usability on a scale from one to nine. An interesting observation here is that both operators who have experience from other combat management systems and those who have not used a CMS before rate Cetris' usability at around seven. Different menus do, however, have different strengths and weaknesses. In the following, the different menus will be surveyed, and both their pros and cons will be addressed.

TID

The advantage of the TID is the fact that it contains all features of the system and is logically structured into different sub- and function menus. The buttons are big and easy to press, which can be an advantage in rough sea. Once a function is selected, the operator gets feedback as the button turns yellow. What users consider the main advantage of the TID is the possibility to personalize the quick-TID. Most users maximize the functionality of the quick-TID by selecting as many vital commands as possible, and many of the operators in the study also request a bigger quick-TID.

The disadvantages of the TID are closely connected to its advantages. The fact that it offers all possible features sometimes makes it hard to navigate in. It also means that users have to switch between different menus in the same scenario. For example, the gun control officers have to alternate between the Gun and the Weapon control director menus during an incoming air raid. Furthermore, the quick-TID, which is extensively used by the operators, cannot be saved. This means that a lot of time is spent applying quick-TID settings each time the system is started. Yet, this does not stop operators from using it.

MTD

The advantages of the MTD menu appear to be the fact that it gives the operator a good overview and that it does not require a change of focus, since it is located next to the situation picture. However, to a very large extent, operators do not use the menu for anything but checking different statuses, such as information about a hooked target, which settings are selected for range, speed, etc., and communication information such as the callsigns of different ships. Function buttons, for example those in the "quick" tab, are very seldom, or never, used.

Hence, the main disadvantage of the MTD menu is that it takes up a lot of space and is not frequently used for anything other than status checks (which is only a small part of the menu's functionality). It also puts a lot of unnecessary information in the focal point of the user.

Dropdown

A big advantage of the dropdown menu is its ease of use. It self-adjusts its length to show only the selectable options, depending on whether a target is hooked or not, and has the benefit of being dynamically positioned in the situation picture. New users often turn to the dropdown menu when looking for different commands.


Even though most of the users find the dropdown menu quite fast, this only applies to functions that are selectable at the "first" level. However, most commands require the user to navigate through one or two sub menus, which instead makes the dropdown menu quite slow. It also requires a precision that can be hard to obtain in rough sea. Another problem with the dropdown menu is that it to some extent uses a different terminology than the TID. Examples of this are "Report to" and "Data Fusion".

Hardware buttons

The functionality of the hardware buttons is quite limited. However, their usability is good, due to the simple and fast interaction the hardware buttons allow. Experienced users find the right button without even looking. Regarding the keyboard, it provides a lot of different shortcuts and can be used to speed up the process of working with forms.

Most of these physical buttons are commonly used. However, some of them seem to be totally unnecessary. The drag and drop button, and to some extent the joystick button, are not used at all (not even by gun control officers). Furthermore, the keyboard could be more frequently used. Today, very few of the operators know how to use the different shortcuts. Since a lot of the work seems to be focused on handling different forms, a better understanding of keyboard usage could definitely speed up the work process. In terms of placement, the keyboard is centered in the GOC. However, since the keyboard has a num pad, all the alphabetical keys end up offset to the left, which some users find annoying.

Contextual Factors of the Usability

There are a number of contextual factors that influence the usability of the Cetris system. Many of these have already been thoroughly addressed, since the environment in a combat intelligence central is very different from, for example, a normal office environment. In this study, three main contextual factors that affect the usage of Cetris have been identified. These are:

• Weather conditions
• Endurance
• Communication

Weather conditions in general, and rough sea in particular, constitute one of the main contextual factors for Cetris. Working in a moving environment can lead to performance decrements for several reasons, including motion sickness and a concomitant reduction in motivation, as well as direct biomechanical influences.30

A second contextual factor for Cetris is that operators sometimes use the system for long periods of time out at sea. In intervals of four or six hours at a time, they are to stay focused on their task. This means that the system is required to, as far as possible, maintain and enhance the endurance of the operator.

30 McCauley et al. (2007), p. 1


The third contextual factor that will be addressed is communication. Cetris provides a solid base for communication with other units, and the basic idea of the whole combat management system is to communicate a cohesive situation picture. In addition to this, there is also ongoing communication between different operators at all times. In contrast to the communication between units, the communication between operators in the combat intelligence central is mainly carried over the intercom system and hence not visible in Cetris. This means that a lot of information which is critical for Cetris' operators to successfully complete their tasks is not sent through the system itself.

These three contextual factors will be regarded when developing recommendations for the design of the system’s different menus.

Frequency of Use

When starting out with this master's project, the starting point was that the scenarios described to the operators would be performed in one single menu. However, the tests have shown that operators often use different menus when performing one single task. For example, when an operator is asked to create a manual target and set its category, id and type to certain values, the operator might create a new manual target in the TID, set category and id in the target form and then choose track type in the dropdown menu. Hence, three different menus are used to perform one task, while another operator might use solely the dropdown menu to achieve this. Thus, the question of which menu is used most frequently is not easy to answer. Furthermore, the scenarios that the operators have conducted can be performed in different numbers of menus, sometimes in two, sometimes in three, and they are not always the same ones. For example, selecting range can be done in both the TID and the dropdown menu, while a call sign can be checked in either the MTD or a form opened in the TID. This also makes a statistically correct comparison between the menus difficult to achieve.

To be able to answer the question above, one must make a subjective estimation from the data collected. Nevertheless, if we first start by solely comparing the raw data in Figure 14, it is clear that the TID is the menu used most frequently. Since the TID is the only menu that contains all the functionalities of the system, this seems quite natural.

The second most frequently used menu is the different kinds of forms. Many of the users have certain forms displayed at all times to the right in the MTD, and it is therefore convenient for them to execute commands or look up information in these forms. Apart from already open forms, operators sometimes prefer to open forms, for example through the TID, and then execute commands in these.

The third most frequently used menu is the quick-TID. Before starting each test, the operator was asked to set up his or her quick-TID as they normally do. This turned out to be very important, since many of the commands tested in the study, i.e. commands that can be executed in several menus, were in a number of cases also the buttons which operators chose to put in their quick-TID. Hence, one conclusion is that it does not matter if a frequently used command is located in a number of menus; if the command is crucial to the operator, he or she will put it in the quick-TID anyway.

The next menu in terms of occurrence is the MTD. However, the tasks for which the operators have chosen to use this menu are almost exclusively status checks of different kinds, for example the course and speed of a hooked target or the current callsign of another ship. In some cases the MTD menu is also


used when zooming in or out on the sea chart. Hence, a conclusion here is that the operators mainly prefer to use the MTD menu passively, for example when checking statuses of different kinds.

Next is the dropdown menu. The opinions about the dropdown menu appear to differ a lot between operators. Some like it very much, and think it is practical and allows the operator to handle functions without losing focus on the situation picture. Others think the opposite; they do not like the fact that the dropdown menu takes up space in the situation picture, and they also think it takes too much time to navigate in. One observation is that operators who have little or no formal training on the system seem more satisfied with the dropdown menu than those who have been formally educated on the Cetris system.

The last menu to be evaluated is the hardware buttons. The hardware buttons are ranked last in terms of frequency of use in Figure 14, but this is likely because they cannot provide the same range of functions as the other menus. Even so, some of the hardware buttons are among the most frequently used of all. For example, "centre own ship" and "off-centre own ship" are commands which almost 100% of the users prefer to execute through the hardware buttons.

To conclude, one can say that the TID is the most frequently used menu, but that other menus tend to complement it. Because of statistical issues, due to the complexity of the system, it is hard to draw any real conclusions solely from the numbers on frequency of use. Instead, it is more interesting to look closer at the questions below, and after that draw potential conclusions regarding changes in the menus to improve the Cetris system.

Efficiency

Figure 15 shows the average time needed to execute commands in the different menus, i.e. the TID, MTD, dropdown menu and hardware buttons. The diagram indicates that the TID usually is the second fastest way to execute a command. Next is the dropdown menu, and in fourth place we find the MTD. However, since the MTD (as discussed above) is normally used by the operators to check statuses of different kinds, they sometimes have a hard time finding the right button, and hence it takes a long time to execute commands in the MTD.

Due to the restricted functionality of the hardware buttons, only one function was tested in the TID, the dropdown menu and the hardware buttons alike. Here, it was clear that the hardware buttons are the fastest way for the operator to carry out a command. However, only a small number of functions can be handled through these.

Another interesting observation is the fact that the quick-TID is very important for all operators, and that most of the users maximize the functionality of their quick-TID in order to be able to execute as many of their most frequently used commands as fast as possible. This implies that the quick-TID probably is almost as fast as the hardware buttons, since they are placed at the same distance from the trackball, and neither of them demands navigation through higher levels before finding the right button.


Characteristic Usage of Menus

We have earlier discussed which menu is most frequently used. But which menu is used when executing a specific command, for example when creating a new manual target? This appears to be very individual and differs between commands and users. However, the level of formal training seems to play a role here. The test results indicate that operators who are self-taught use the dropdown menu to a higher extent than those who are not. Perhaps this has to do with the fact that Cetris follows Windows standards, and that it might feel natural for a beginner to start looking for a command by right-clicking in the situation picture. Once he or she has found the command in the dropdown menu, they continue to use the command through that menu. This could in turn mean that the operator's actions are sub-optimizing the system, since the dropdown menu in many cases has been shown to be a slower alternative than other menus such as the TID.

But does a single operator always carry out commands in the same way? The answer to this question is both yes and no. If it is a frequently used command, it is probably in the operator's quick-TID, and hence the command is always executed through that menu. However, if it is a command that is not so frequently used, operators often vary the way in which they carry it out. For example, if the "right" function menu is displayed at the moment, the operator might use it, but if it is not, he or she might use the dropdown menu instead. The bottom line, however, is that the most important and frequently used commands are always carried out in the same way.

Quick-TID Usage

Different operators place different buttons in their quick-TID. However, the functions of these buttons are always closely connected to the main task of the operator. For example, conscripts tasked with plotting air or surface targets mainly have functions such as target handling, range, text message and buttons for radar selection in their quick-TID, while a Sub Warfare Officer instead chooses buttons to handle pointers or torpedoes.

For air warfare officers handling functions such as the gun and weapon control director, the quick-TID is almost full of predetermined choices. All but one of these functions can be found elsewhere in the TID menu, and some of the predetermined buttons are, according to the operators, not critical for their task. Hence, these operators request either a "clean" quick-TID or a bigger one which allows them to complement the predetermined commands with their own choice of buttons.

Overall, operators are extremely content with the quick-TID, since it ensures fast handling of the system and allows them to develop their own methodology. Thus, many operators request even more ways to customize their MFC, for example through a bigger quick-TID or programmable shortcuts. However, all operators are confused by the fact that you cannot save your quick-TID settings. Since they usually sit at the same MFC, it would be preferable to save the settings locally on that specific console, thereby saving a lot of time each time the system is started.


Usage of Hardware Buttons

Operators do use the hardware buttons, but to different extents. If we start with the yellow buttons placed right above the trackball, almost everyone prefers the hardware buttons for "centre- and off-centre own ship", and many also use "apply" and "insert track ball". Some also use "ack" to acknowledge alarms, but this is not quite as common. One explanation for this is the fact that when you acknowledge an alarm, you probably want to read the whole message. Hence, it is more useful to click on the alarm in the MTD, and by doing so be able to read the message attached to the alarm.

Concerning the trackballs and their buttons, different users use them in different ways. To start with, the gun control officer uses the left trackball. Left-handed operators also use the left trackball sometimes, but prefer the right one, since a lot of other buttons are placed in close connection to it. Regarding the trackball buttons, very few of the users knew the function of the "drag and drop" or "joystick" button. Some gun control officers use the joystick button, but none of the participants in the study used the "drag and drop" button.

Regarding the keyboard, most participants in the test used it sparingly for anything other than letters and numbers. Some used the tab key when filling out a form, and then enter to apply/ok, but most operators used the trackball to navigate between the different fields in a form. Very few knew any shortcuts, such as Ctrl-I to insert or F4 to view a dropdown list in a form.

The Role of Redundant Information

All users participating in the study claim that the redundancy enhances the system's usability. However, only one of them says he or she always executes commands in different ways. The other twelve claim that they always, or almost always, carry out commands in the same way. (What is decisive here is which TID menus are displayed at the moment, how stressful the situation is, etc.) From the conclusions which showed that different menus have different strengths and weaknesses, it is probable that users do not maximize the effectiveness of the system when always using the same way to execute a command.

Observations also show that even though most users always carry out tasks in the same way, the particular operating method differs between operators. Hence, on a system level, all the different options for how to execute a command are used, but on an individual level, the redundancy is seldom utilized.

Potential Problems with Navigation in Menus

To further investigate the redundancy of the Cetris system, one of the hypotheses when starting out with this master's thesis was that redundant information makes the system hard to navigate in. However, interviews have shown that since most operators always use a particular set of commands, they never have trouble finding specific buttons. On the other hand, in many cases operators (as discussed above) are not familiar with all the ways in which they can execute a command.

Hence, the problem with difficult menu structures seems to be a bigger issue for technicians, who use the menus more seldom and, when they do, work with a wide range of features. For them, it


can sometimes be hard to find specific commands, but in the big picture this is not really an issue.

As discussed earlier, operators often use the same menu each time they carry out a specific task. This is probably the main reason why they do not find it hard to navigate in the menus, except on occasions when they actually have to find a command they do not normally use. When that happens, operators use the TID in most cases, and in some cases the dropdown menu, to logically search through the menu structure in order to find the right button. The operators are in most cases happy with the way different commands are categorized, and they find it quite easy to navigate logically through the different menus.

Insights from the Test Process

Even though observations appear to be central for a study in what could be described as an "extreme" environment, this is less applicable if the test monitor is already familiar with the contextual environment. Since that was the case in this study, the observations became more or less a way to structure information I was already aware of. However, in a study where the test monitor does not have any pretest experience of the environment, observations are a vital part of the usability tests, since they provide a lot of information on the links between system design and the working environment.

An overall experience from this study is that the contextual cognitive walkthrough was the single most important way of data collection. The contextual cognitive walkthrough provided both answers to predefined questions, as well as information on random issues addressed by the users. These characteristics made the test most valuable.

The performance measurement seems very straightforward on paper. However, when performing the test it was obvious that this was not the case. The main reason for this is that operators sometimes had a hard time finding specific functions in a menu. Since the test was designed to measure how fast the different menus are, and not the operator's system knowledge, it was sometimes difficult to make a fair assessment of the menus' efficiency. The test results still indicate an overall trend, but, as always, it is important to keep the sources of error in mind when evaluating the data.

Lessons for the future

Overall, the chosen test methods worked out quite well and gave a lot of important insight into the usability of Cetris' menus. However, for future studies it would be advisable to analyze beforehand, and in more detail, how the quantifiable data can be represented. In this study, a lot of exceptions were discovered that had not been explored in the pilot tests. For example, many users carry out one single task in a mixture of menus instead of in a single menu. Hence, it became problematic to show which menu was most frequently used. The fact that different menus have different ranges of functions also affected the data collection. In a future study, it would be advisable to analyze beforehand how this "asymmetric" data can be represented in an objective way.


Recommendations

In the following, recommendations built on conclusions drawn in the previous chapter will be presented. The recommendations are categorized into three different groups: recommendations regarding system design, recommendations on how to improve system usage and, finally, recommendations on how user centered design could play a bigger role in the system development at Saab Systems. The recommendations are unbiased with respect to external restrictions such as financial or security aspects. As an introduction, a short summary is presented.

Recommendations in Short

• Reduce the complexity of some function menus in the TID, for example by using color to emphasize grouping of functions.
• Provide a navigation tool in the TID (primarily for technicians).
• Use the MTD menu exclusively for status checks and keep it as a solid menu.
• Decrease the number of functions in the dropdown menu and avoid submenus.
• Follow a consistent terminology.
• Give forms a less complex design, for example by separating settings and actions in different tabs.
• Provide scaling of forms, alternatively a "form-bar" in the lower part of the MTD.
• Provide the possibility to save a "form view".
• Enable users to save their quick-TID.
• Remove all preprogrammed functions in the quick-TID for gun control and air warfare officers.
• Enable operators to program their own hardware shortcuts on the keyboard.
• Remove the drag and drop button or give it another functionality.
• Give operators extensive formal training on the system.
• Implement a structured approach to user centered design.

Recommendations regarding System Design

This sub chapter will present a number of recommendations on how the design of each menu in Cetris can be improved in the future. Notice that a lot of the recommendations have characteristics that allow for a personalization of the different menus. This is not very radical seen from a usability perspective, since personalization is a big usability trend at the moment. However, one of the main features of the Visby corvettes in general, and the Cetris system in particular, is its uniformity. A user should be able to sit down at whatever MFC is available and log on to it to carry out his or her task. Hence, the challenge with the recommended personalization features is to implement them without compromising the conformity of the system at large.

TID

The TID menu has proven to be both the most frequently used menu and one of the fastest ways to execute a command. However, a number of things can be done to improve the usability of this menu. This study has shown that some parts of the TID menu need more work than others. This specifically applies to the sub menus handling the weapon control director and the gun, i.e. the menus in which gun control and air warfare officers work. The problem with these menus seems to be that they are too


crowded, and that operators need to switch between different sub menus when carrying out a single task. To illustrate this problem, the TID menu below to the left shows the complex sub menu for the weapon control director, while the right TID shows the sub menu for target handling.

Figure 16 Two different sub menus

There are different ways to tackle this problem. To start with, one must answer the question: is it the complexity of the gun and weapon control director functions that makes these menus much more complicated than other menus, or is it simply a matter of poor interface design? If the problem is the interface design, a usability study focusing only on the usage of the weapon control director and the gun must be done in order to understand how the interface design can be improved. If the problem is the complexity, something must be done to give these menus a less confusing design without changing the current structure of the buttons. One way to accomplish this is to use colors. Functions that "belong together" but for some reason are located in different places can be given a color code to emphasize their connection.

Regardless of what way one chooses to improve the interface of the gun and weapon control director, this study clearly shows that such an improvement is necessary. However, as discussed later in this chapter, a change in the quick-TID settings for operators handling gun and weapon control director can also facilitate their work.

Another issue regarding the TID, which was examined in the usability tests, was whether its complexity made it hard to navigate in. This was seldom the case for operators, who usually only handle functions in a limited number of menus. However, interviews with technicians onboard showed that they sometimes face this problem. Hence, it is advisable to provide some kind of navigation tool that helps operators and technicians find different commands. This could be a dialog box in which the operator or technician can type the function they are seeking, followed by a tree structure which displays the path to the requested function.

MTD

The study has shown that the MTD is mainly used for status checks. Hence, it should be redesigned with that in mind. Today, the majority of the MTD menu consists of function buttons and status fields sorted under different tab forms. To make the menu more usable, it should be designed primarily for extensive status checks, and the number of available function buttons should be reduced. When making status checks, it is preferable if the operator does not have to navigate between different tab forms, but instead can glance at the MTD menu and immediately find the information he or she is looking for. One way to achieve this is to have, for example, three solid status fields on the upper half of the MTD, and tab forms with status fields connected to the different operator roles. In this way, each operator can both see the solid status fields (which present information relevant to all operators) and choose a tab status field connected to his or her role.


Today, there are also a number of function buttons in the MTD menu. To some extent these may be worth keeping, but if so, a study must be conducted to examine which functions are frequently used enough to deserve a shortcut in the MTD. The function buttons presently provided in the MTD menu seem quite randomly picked, and there is nothing that indicates that these buttons are of higher importance than others.

With its current design, the MTD menu takes up quite a lot of space in the MTD. In later versions of the system, the menu has a "pop-up" design and is only visible when the mouse pointer is drawn to the edge of the display. However, as discussed earlier, the biggest advantage of the MTD menu is its ability to provide status data in a quick way, and hiding the menu behind a pop-up function takes this advantage away. Instead, it is important to guarantee that a solid MTD menu only contains information which is critical to the operator.

Drop Down Menu

The dropdown menu appears to be quite effective, but the functions available through this menu need to be reassessed. As in the case of the MTD, the commands available in the dropdown menu seem to have been randomly picked. Some of them are of course frequently used, but others are not. The number of available functions has a great impact on the usability, since a long dropdown menu is both hard to navigate in and takes up a lot of space in the situation picture.

Another aspect of this menu is the fact that it sometimes has one or even two levels of submenus. The study indicates that navigation through these can be difficult, especially in rough sea conditions. Hence, the recommendation for this menu is to reduce the available functions to only the frequently used ones and, if possible, avoid sub menus in the dropdown list. The illustration below shows the three-level dropdown list in the situation picture.

In the current design, some names of the commands differ from names in the TID. This inconsistency is both confusing and unnecessary. A consistent terminology is definitely a core requirement to make a system as usable as possible.

Figure 17 Dropdown menu in three levels


Forms

The forms are probably not seen as a menu function, and hence less thought seems to have been put into them than into other parts of the system interface. Even though developers are provided with different design guidelines, this does not guarantee a consistent design for the over 700 forms available in Cetris. Developing a more consistent and usable design of the forms would require a separate study. However, this report will present a few ideas to facilitate the form handling.

Firstly, less complex forms would be preferable. Currently, there is a mix between settings and actions in many forms. By separating these, e.g. by tabs, a less complex view could be obtained.

As explained earlier, most operators always have a number of specific forms presented in their MTD. However, in some cases only a small part of these is of interest to the operator. By providing a scaling functionality, which is not available today, operators could save a lot of valuable space in the MTD. An alternative to this would be a bar which, according to Windows standards, would show all forms currently opened in the MTD, even when the forms are covering each other.

In order to facilitate the work with forms, a possibility to save a “form view” for each operator could be provided. This would be somewhat like being able to save your own quick-TID (a question discussed in the next section), so that the operator does not have to arrange the forms every time he or she logs on to the system.

Finally, a potential study on form handling should also question whether all functions and settings in the forms really are necessary. Is it possible to reduce complexity by implementing less frequently used settings in other ways than through forms?

Quick-TID

The quick-TID is very frequently used. Hence, the primary recommendation is to make it possible to save the quick-TID of a specific user, or alternatively to have a user1/user2 function with pre-programmed quick-TID menus, in order to decrease the start-up time before system usage. (User1/User2 has been tested in an earlier version of Cetris, placed onboard the Stockholm class, and appears to be well regarded by the operators.) Either alternative would eliminate the time it takes to set up the quick-TID each time a user logs on to the system.

Another conclusion from this study is that many operators want to expand the functionality of the quick-TID in terms of how many function buttons can be copied onto it. Hence, another recommendation is to expand the quick-TID area.

Finally, to extend the advantages of the quick-TID to all users, it is recommended to remove all the pre-programmed functions for gun control and air warfare officers.

Hardware Buttons

The hardware buttons today have limited functionality. Since the personalization of the quick-TID is so popular, it is recommended to let operators program shortcuts of their choice on the keyboard as well, for example with the F or Ctrl keys. Keyboard shortcuts do exist today, but they are very seldom used. (This issue will be discussed more closely later.) By letting users program their own shortcuts, it is likely that the keyboard will be regarded as more dynamic and hence be used more frequently.

Concerning other hardware buttons, most of them are frequently used. However, the drag-and-drop button placed beside the trackball is never used, and most operators are not even aware of its functionality. The general opinion seems to be that one would rather handle the trackball buttons in the same way as a mouse, i.e. hold down the left trackball button for drag and drop. Because of this widespread opinion, it is recommended to either remove the drag-and-drop button or give it another functionality. One very commonly used function is zoom in/zoom out; thus, one suggestion could be to implement a zoom function instead of drag and drop.


Recommendations regarding other aspects of the System

Usage

Even though Cetris is based on earlier combat management systems, it has still taken about four years to develop, and it has a complexity far higher than earlier systems onboard Swedish navy vessels. However, the general educational status among Cetris operators appears to be quite low. Most of the operators have not had any formal training on the system at all, and those who have received no more than about a week. Therefore, extensive formal training for operators is recommended.

For combat management systems, learning by doing may have been the accepted educational method. However, given the complexity of the Cetris system, this is not advisable. One of the main questions in this study was whether the system is used suboptimally due to its redundancy. When officers learn to handle the system through the learning-by-doing approach, they might not find all the ways in which a command can be executed. As discussed earlier and shown in figure 15, the menus differ in how fast the operator can interact with them. Hence, it can be concluded that through the learning-by-doing approach, an operator might learn to use the system in a way that is not optimal. Furthermore, the learning-by-doing approach might also mean that operators never discover functions that are new in the Cetris system, compared to older systems, since they do not know what to look for.

However, the solution to this problem is very simple, and hence it is strongly recommended that operators be given proper training on how to fully handle the system. For a system as expensive as Cetris, the cost of formal training programs for the operators must be regarded as a good investment for the armed forces and an effective way to secure the most effective usage of the system. Furthermore, it is also important to stress the value of daily system usage in order for operators to improve their skills.

Recommendations regarding how to improve the User Centered Design process at Saab Systems

“The only way to deal with the increased complexities of the future in command and control systems, the huge amount of data available, reduced manpower and cost goals, and training in tactical operations is to follow a user centered design process.”31

As mentioned in the beginning of this report, usability tests are only a small part of a proper user centered design process. In the following, a proposition will be made on why Saab Systems should work more systematically with UCD. In short, the reasons are:

• Financial – UCD decreases the number of last-minute changes
• A unique selling proposition – UCD can play a role when marketing the system
• Competitors – Others are already working with UCD
• Implementation – Some of the work is already done

31 Perry et al. (2000) p. 1


Today, Saab Systems involves users in different phases of the product design cycle, from generating requirements to testing the final product. The involvement of end users is also a selling argument when marketing different combat management systems. However, the collaboration with users is not in any way standardized. Hence, a more structured way of working with end users, through ISO standards regarding usability and user centered design, could be suggested. Besides the fact that these standards will help Saab Systems build better and more usable systems, it is also important to stress the financial advantages such an approach would imply, for example the decrease in last-minute changes to the system design or the unique selling proposition such a standardization implies. In the following paragraphs, a short presentation of the most central standards is given, followed by a discussion on how and why implementing them in system design at Saab Systems would give a competitive advantage.

ISO Standards for Usability and User Centered Design

Usability and User Centered Design standards can be divided into three main categories32:

• Product usage characteristics
• Product interface attributes
• Development process

1. Standards dealing with product usage characteristics

The standard ISO 9241-11 (1998) Guidance on Usability defines usability as:

“The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.”

ISO 9241-11 is a central standard in usability, explaining how to identify the information necessary to take into account when specifying or evaluating usability in terms of measures of user performance and satisfaction. Guidance is given on how to describe the context of use of the product (hardware, software or service) and the required measures of usability. More importantly, it also includes an explanation of how the usability of a product can be specified and evaluated as part of a quality system (e.g. one conforming to ISO 9001)33. The fact that Saab already uses ISO 9001 for quality verification is a key factor in why Saab could easily take on the work with usability standards.

2. Standards dealing with product interface attributes

These standards deal with characteristics of the product itself and can be used to specify and evaluate details of the user interface of the product and how it works. However, it is important to note that they must be interpreted and applied based on the context of use of the particular product in question. Examples of such standards are ISO 9241-3 (1993) Visual display requirements, ISO 9241-4 (1998) Keyboard requirements and ISO 9241-10 (1996) Dialogue principles.

3. Standards dealing with the product development process

The ISO 13407 (1999) Human-centered design processes for interactive systems is probably the most important standard in this area and provides guidance on human-centered design activities throughout the development life cycle of interactive computer-based systems. It is a tool for those managing design processes and provides guidance on sources of information and standards relevant to the human-centered approach34.

32 www.usabilitypartners.se (081010)
33 www.usabilitypartners.se (081010)

Human-centered design is described as a multidisciplinary activity, incorporating human factors and ergonomics knowledge and techniques with the objective of enhancing effectiveness and efficiency, improving human working conditions, and counteracting possible adverse effects of use on human health, safety and performance.

This standard describes four essential user-centered design activities that should be planned for and undertaken in order to incorporate usability requirements into the development process. These are35:

• understand and specify the context of use
• specify the user and organizational requirements
• produce designs and prototypes
• carry out user-based assessment

The activities are carried out in an iterative fashion, with the cycle being repeated until the particular usability objectives have been attained. The plan should identify how these activities can be integrated with other development activities, as well as the people responsible for them.

Figure 18 The Human Centered Design Process according to ISO

Why should Saab implement these standards?

Besides the fact that user centered design implies a better product portfolio, there are four main reasons why Saab should implement these standards in the company’s system development life cycle.

34 www.usabilitypartners.se (081010)
35 www.usabilitypartners.se (081010)


Financial

The first reason is financial. A number of studies have concluded that, despite larger initial costs in system development, user centered design reduces errors, enhances the usability of the system and minimizes costly last-minute changes. Altogether, the reduction of errors and changes reduces the total cost of system development, while the enhanced usability gives the product a competitive advantage.

A Unique Selling Proposition

The second reason is also somewhat financial, but focuses on the unique selling proposition that use of the ISO standards for usability would imply. Today, when marketing a system, Saab Systems points out the fact that end users are involved in the system design process. However, it is not clear how, and to what extent, this is done. By integrating the standards mentioned above, it is possible to certify a user centered design approach through Saab’s already existing ISO 9001 certification. Hence, a certified user centered design approach would be a solid unique selling proposition when marketing a product.

Competitors

The third reason to implement a user centered design approach in system development is the fact that competitors do it. For example, the current development team of the American multi-mission destroyer Zumwalt (DDG 1000) has adopted the user centered design approach explicitly, in everything from ergonomics to combat management systems36. The increased importance of usability engineering, both to prove the product’s quality and to reduce costs, means that Saab Systems likely cannot afford not to adopt a user centered design approach.

Implementation

The fourth reason why Saab Systems should adopt a user centered design approach is the fact that much of the work is already in place to some extent. As mentioned before, end users are today engaged in everything from developing requirements to testing the final product. However, this work needs a more structured and detailed approach to reach its full potential. Following the standards suggested above could accomplish this; furthermore, the standardized work can later be certified via the ISO 9001 work at Saab Systems.

Additional reasons

The implementation of a more user centered design approach is likely to give Saab a number of advantages. Apart from the reasons stated above, other factors, such as the ongoing development of a new System Development Handbook and the possibility to engage experts on the subject from both Uppsala University and the Royal Institute of Technology (both of which have departments for UCD), speak for a change in Saab Systems’ development life cycle.

36 Quintana et al. (2007) p. 1


Future use of Conclusions and Recommendations

A description of the recommendations rendered in this study was presented in the previous chapter. This chapter will further discuss in what way these recommendations can be adopted at Saab Systems, and what facilitators or difficulties are associated with the specific recommendations.

Saab Systems is presently working on the next generation of combat management systems, which will have a different system architecture than the current ones. From a usability engineering perspective, the biggest difference will be the Data Distribution System technique (DDS), which (among other things) will allow a more dynamic approach to interface design. Today, redesigning the layout of a menu is both complex and time consuming. With the new DDS technique, menus can be redesigned with a drag-and-drop maneuver. This change in system architecture will make it possible to easily test different design suggestions, preferably with end users.

DDS also facilitates many of the changes in interface layout that are recommended in this report. Hence, it is likely that recommendations regarding layout could be implemented quite easily in the next generation of combat management systems. However, for the Cetris versions placed onboard the Visby class today, changes according to the recommendations in this thesis will be harder, but by no means impossible, to implement. Hence, Saab Systems together with the Swedish Defense Materiel Administration must decide whether the different recommendations should be implemented in the short or long term.

Concerning the recommendation of complementary formal education for operators, this is not really up to Saab Systems, but rather a question for the Swedish Defense Materiel Administration or the Armed Forces. However, the apparent lack of competence among operators is a strong selling point, and from a business perspective this is of course interesting.

The last part of the recommendation chapter presented a suggestion on how Saab Systems could implement ISO standards for user centered design and the benefits of doing so. As discussed in the beginning of this thesis, usability engineering is often seen as an additional cost, while in fact it reduces the development costs of a product. Since Saab Systems is part of the military industrial complex, business is often done in another way than in ordinary business-to-business or business-to-consumer companies. Hence, it could be argued that the cost of a user centered design process is unnecessary, since usability aspects for various reasons have very little effect when marketing the system. However, in the long run, it must be the aim of a company like Saab Systems to have as good and usable products as possible, and hence it is important to start a process like the one suggested in order not to give competitors too large a lead in terms of user centered design processes.


Reflections

This master’s project started out with the idea to perform usability tests onboard a navy vessel, preferably in difficult weather conditions such as rough sea, in order to collect usability data on Cetris’ menus in its true context. This chapter provides some reflections on the insights gained during the process.

The initial idea turned out to have some complications. Since Cetris is a very large and complex system, it first appeared impossible to limit the number of functions tested in the study. However, it soon became clear that the functions accessible in multiple menus are not more numerous than a study of this size can handle. Hence, comparison tests between the different menus were possible to conduct, and were conducted quite successfully.

Regarding the context of the study, the tests were carried out onboard, although not in bad weather conditions. Even so, the fact that the tests were conducted in the real operator environment instead of in a laboratory has definitely affected the validity of the tests in a positive way.

However, the biggest mistake made in this master’s project was to initially neglect how the gathered data should be represented. Even though a lot of useful data was collected, it was in some cases very hard to present clearly, e.g. in a diagram or a table. Nonetheless, the data was still valuable when forming both conclusions and recommendations, and since this was primarily a qualitative study, the implications of this error were mainly a headache for the student rather than a question of the validity or reliability of the study.

Nevertheless, despite complications along the way, it can be said that this master’s thesis has fulfilled its purpose: to gain knowledge about some of the most important usability aspects of menus in the Cetris system.

Finally, it is important to acknowledge the positive attitude and enthusiasm that the users participating in this study have shown. It is clear that they are eager to give their input in order to enhance the system’s usability, and it is recommended that Saab use this important source of information in future projects in order to develop more usable systems.


References

BOOKS:

BELL J. 2005 Introduktion till forskningsmetodik, Studentlitteratur. ISBN 978-91-44-04645-7.

RUBIN J. 1994. Handbook of usability testing – how to plan, design and conduct effective tests. John Wiley & Sons, Inc. ISBN 0-471-59403-2.

SHARP H., ROGERS Y. AND PREECE J. 2002. Interaction design: beyond human-computer interaction. John Wiley & Sons, Inc. ISBN 0-471-49278-7.

ARTICLES/THESIS WORK:

ALBINSSON P-A. & FRANSSON F. Communication Visualization – An aid to military command and control evaluation, Human Factors and Ergonomics Society Annual Meeting Proceedings, Computer Systems, pp. 590-594

BROMS M. Utformning av menyer och knappar för pekskärmar – en utvärdering av delar av ett marint ledningssystem, Royal Institute of Technology, 1999

FINK G. ET AL. Naval Crew Workload Monitoring and Visualization, Computer Science Department, Virginia Tech

GABBARD J. ET AL. Usability Engineering for Complex Interactive Systems Development, Proceedings of Human Systems Integration Symposium 2003, Engineering for Usability, Vienna, VA, June 23–25, 2003

HEACOX N. ET AL. Real-time online communications: ‘Chat’ Use in Navy Operations, Space and Naval Warfare Systems Command, 2004

MCCAULEY M. ET AL. The High-Speed Navy: Vessel Motion Influences on Human Performance. Naval Engineers Journal, Vol. 1, 2007, pp. 35-44

MITTU R. & SAGRIA F. Common Operational Picture (COP) and Common Tactical Picture (CTP) Management via a Consistent Networked Information Stream, Naval Research Lab., Washington, DC, 2000

PERRY A. ET AL. The solution for future command and control: human-centered design, Proc. SPIE Vol. 4126, 2000, pp. 42-53

QUINTANA V. ET AL. User-Centered Design in a Large-Scale Naval Ship Design Program: Usability Testing of Complex Military Systems. Naval Engineers Journal, Vol. 1, 2007, pp. 25-34


WILLIS R. Tactical Tomahawk Weapon Control System User Interface Analysis and Design, University of Virginia, 2001

OTHER PUBLICATIONS:

SAAB SYSTEMS, MMI Guidelines for Cetris, Saab System, 2004

WEB SITES:

www.c4i.org:

http://www.c4i.org/whatisc4i.html (081006)

http://jthom.best.vwh.net:

http://jthom.best.vwh.net/usability/perfmeas.htm (080703)

www.stc.org:

http://64.233.183.132/search?q=cache:Rk2CLyF7DAwJ:www.stc.org/edu/53rdConf/dataShow.asp%3FID%3D148+%22user+centered+design+at+hp%22+STC&hl=sv&ct=clnk&cd=1&gl=se (081030)

www.usabilitypartners.se:

http://www.usabilitypartners.se/usability/standards.shtml (081010)

www.wikipedia.com:

http://en.wikipedia.org/wiki/Cognitive_walkthrough (081023)

http://en.wikipedia.org/wiki/Visby_class_corvette (081006)


Appendix A – Test Plan

Purpose

Cetris is a very complex combat management system in which the operator can execute commands in a number of ways, e.g. through different menus. The purpose of these tests is to examine the level of usability in Cetris’ menus and how conclusions regarding this can be implemented in this, and future, combat management systems.

Problem statements/Test objectives

(CCW=Tested during Contextual Cognitive Walkthrough, CI=Tested during Contextual Inquiry, PM=Tested during Performance Measurement)

Are TID- MTD- or DD-menus used more frequently? (CCW)

How long does it take to execute the ….. command in the different menus? (PM)

Which menu does the operator prefer when executing the ….. command? (CCW)

Are commands executed in different ways by the same operator? (CCW)

Can commands be hard to find due to the complex structure of the menus? (CI, PM)

What functions does the operator have in his/her quick-TID? Why? (CCW)

Specific thoughts about any of the menus/commands? (CCW)

Does the operator ever use the HW-buttons to execute commands? (CCW)

Commands to be tested:

Create manual target

Range +/-

Range km

Centre own ship

Hooked category

Hooked ID

Report to/Cyclically

Correlation

Routes

Com info cat

Received messages

Text message

Target info

SRr

NRr

Intercept


User profile

A minimum of 10 users will be tested during weeks 837-840 in Karlskrona. The users in this test should be personnel assigned different roles in the combat central onboard HMS Helsingborg. In case it is not possible to recruit enough operators from Helsingborg, operators with the same tasks onboard other ships also constitute the target group and could be asked to take part in the study. The users may have different levels of training on the Cetris system, but should not be beginners.

Method

The usability test will consist of the main contextual cognitive walkthrough together with the performance measurement. In addition, a written posttest is conducted in order to gather more subjective data about the usage of menus in Cetris.

The tests are composed of the following five sections:

1. Participant greeting and background questionnaire

Each participant will be personally greeted by the test monitor, and made to feel comfortable and relaxed. The participants will be asked to fill out a short questionnaire that gathers basic background information.

2. Orientation

The participants will receive a short, scripted verbal introduction and orientation to the test, explaining its purpose and objective. They will be assured that it is Cetris, and not themselves, that is the center of evaluation, and that they should perform in a way that is typical and comfortable to them. The participants will be informed that they are being observed and videotaped.

3. Contextual Cognitive walkthrough

After the orientation, the participant will be asked to sit down at the MFC. The test monitor will read the scenarios to the operator and observe, record and note in what way the operator executes the commands. During the process, the test monitor can ask the operator to explain his or her actions, raise additional questions, or in other ways interact with the participant. The participant will be videotaped in order to get a record for verification.

4. Performance measurement

In this test, the test monitor will ask the operator to execute commands in specific menus. During this test, elapsed time will be noted for each unique task on the task list. The test monitor will also take notes about relevant participant behavior, comments and any unusual circumstances that might affect the result. The participant will be videotaped in order to get a record for verification.
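As a purely illustrative sketch (the study itself relies on manual timing with video for verification), the per-task elapsed-time logging described above could be supported by a small stopwatch utility like the one below. The class and task strings are hypothetical, not part of the actual test setup:

```python
import time

class TaskTimer:
    """Stopwatch that logs elapsed time per unique task on the task list."""

    def __init__(self):
        self.results = {}   # task description -> elapsed seconds
        self._task = None
        self._start = None

    def start(self, task):
        # Begin timing when the test monitor reads out the task.
        self._task = task
        self._start = time.perf_counter()

    def stop(self):
        # Stop timing when the operator has completed the command.
        elapsed = time.perf_counter() - self._start
        self.results[self._task] = elapsed
        return elapsed

timer = TaskTimer()
timer.start("Open a text message form using the TID")
# ... operator performs the task ...
timer.stop()
```

The results dictionary can then be exported per participant, keeping the timing data in the same task order as the task list.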

5. Participant debriefing & Posttest questionnaire

After all the tasks are completed, each participant will be debriefed by the test monitor. The debriefing will include the following:

• Filling out the posttest questionnaire.
• The participant’s overall comments about his or her performance.
• The participant’s responses to probes from the test monitor about specific errors or problems during the test.


Task/Scenario list

Contextual Inquiry/Cognitive Walkthrough:

In this test, notes are to be taken on how each command is executed, e.g. through which menu, whether the operator has a hard time finding the right button, etc. Some of the actions below can be seen as tasks, since they can be carried out very quickly, while others can rather be called scenarios, since they require a series of steps in order to be executed.

Create a new surface target and set it to hostile corvette, course 090, speed 5 knots.

Change the range to 160 nm.

Centre own ship.

Look at the external text messages you have received.

Change the category of your target to air.

Change the ID to own.

Send this target cyclically to all other units.

Create a new manual target. Correlate this one with the first one.

Create a route with 5 waypoints, name it and save it.

Look up the current callsign for HMS Visby.

Send a text message to HMS Visby.

Check course and speed for target number XX.

Select to present the SRr and adjust the thresholds for that specific radar. (Show on grey buttons.)

Describe a scenario common to you and explain what you are doing.

Performance measurement:

In this test, the time it takes for the operator to execute the command is noted by the test monitor.

Open the Com Info Catalogue using the MTD.

Open a text message form using the TID.

Open a text message form using the MTD.

Change the range to 20 km in the DD-menu.

Change the range to 30 km in the TID.

Open the form with externally received text messages from the MTD.

Off-centre own ship with the DD-menu.

Centre own ship with the HW buttons.

Open the form with externally received text messages from the TID.

Off-centre own ship in the TID.


Hook a track and then change the category of the track using the TID.

Hook a track and then change the category of the track using the DD-menu.

Correlate two targets using the TID.

Decorrelate the new target using the DD-menu.

Change the range one step with +/- buttons in the TID.

Change the range one step with +/- buttons in the MTD.

Background questionnaire

These are the questions given to the operator in the background questionnaire.

Name?

Year of birth?

Highest education (civilian & military)?

Role onboard the ship?

Earlier working roles in which Cetris has been used?

Have you taken part in the development of Cetris? If yes, in what way?

How long is your formal education on the system?

For how long have you worked with Cetris onboard?

Do you have any experience from other combat management systems?

On average, how many hours per week do you use Cetris?

How good are your general computer skills?

Are you right- or left handed?

What operating system are you most used to?

Written Posttest

These are the questions given to the operator in the written posttest.

Which one of the menus (TID, MTD, DD, HW) do you think is easiest to use? Why?

What strengths do you think the different menus have?

What weaknesses do you think the different menus have?

Do you always execute a command in a certain way, i.e. through the same menu, or is that dependent on the other tasks at hand?

Do you think redundancy is positive, i.e. the fact that commands can be executed in different ways?

Do you sometimes experience that it is difficult to find a specific command in the menus? If yes, how do you solve the problem?

What commands do you copy to your personal quick-TID? Why these? Are they always the same or are they dependent on the task at hand?

Do you think the commands are logically grouped in the different menus? If not, in what way would you like to change them?


Which ones of the hardware buttons do you use?

What is the most common mistake you make when using the system?

Are there any usability issues in the system you would like to address?

How easy do you think it is to learn and use the Cetris system as a whole?

Additional comments?

Test environment/equipment

The test environment will be the combat intelligence central of HMS Helsingborg. This is to make the tests as close as possible to the real conditions in which the operators work. The equipment used by the operator during the test is the MFC, i.e. the same gear as in normal operating procedures in the combat central.

Test monitor role

The test monitor will sit in the combat intelligence central with each participant while conducting the test. The test monitor will initiate tasks after initial setup, simulate the particular scenarios necessary for each task, record timings, and take notes during observations. The test monitor will not help any of the participants unless it is a question about the test procedures.

Evaluation measures

The following evaluation measures will be collected and calculated:

1. The number of times each menu is used when executing a command. (CCW)
2. The time it takes to execute a command. (PM)
3. The user’s subjective rating of the system’s usability, ease of use of certain menus, etc. (CCW, written posttest)
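As a minimal sketch of how the first measure, the number of times each menu is used, could be tallied from the walkthrough observation sheets, consider the following; the observation data here is purely hypothetical:

```python
from collections import Counter

# Hypothetical CCW observations: (task number, menus used to execute it).
observations = [
    (1, ["MTD", "DD"]),
    (2, ["TID"]),
    (3, ["HW"]),
    (1, ["TID", "DD"]),
]

# Measure 1: how many times each menu is used when executing commands.
menu_usage = Counter()
for _task, menus in observations:
    menu_usage.update(menus)

# Print menus in descending order of frequency of use.
for menu, count in menu_usage.most_common():
    print(f"{menu}: {count}")
```

The same tally, broken down per task instead of in total, would show which menus compete for the same commands across the redundancy of the system.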


Appendix B – Test Results

Contextual Cognitive Walkthrough

Name of participant: AA

Task TID MTD DD HW Forms Quick-TID Comments

1 x x Too slow with DD

2 x +/- buttons

3 x

4 x Sort on the oldest and not the newest

5 x Want a bigger quickTID

6 x

7 x Inconsistent with Report to

8 x

9 x

10 x

11 x

12 x

13 x

14 Target handling: change cat/id, target number, correlation

Name of participant: BB

Task TID MTD DD HW Forms Quick-TID Comments

1 x x x DD in the form is hard to understand, easier with DD for Cat/Id

2 x This menu is always displayed

3 x x Either quickTID or HW

4 x x This form is always displayed

5 x

6 x

7 x

8 x

9 x

10 x

11 x

12 x

13 x

14 Uses the quickTID plus 2 other menus - easy to navigate

Name of participant: CC

Task TID MTD DD HW Forms Quick-TID Comments

1 x x

2 x

3 x

4

5 x

6 x


7

8

9

10 x

11 x

12 x

13 x

14

Name of participant: DD

Task TID MTD DD HW Forms Quick-TID Comments

1 x x DD fine when no rough sea

2 x Uses +/- buttons if there is not very big changes in range

3 x Is very fond of HW-buttons

4 x If the form might be needed, it is always displayed in the MTD

5 x

6 x

7 x

8 x Sometimes uses DD

9 x x x Uses routes for benchmarking; however, it is not possible to see the total distance

10 x

11 x

12 x

13 x

14 Keyboard too far to the left. Wants color and shape to be able to handle gun and dir functions. Inconsistency in forms; sometimes a yellow check mark means something positive, sometimes negative, in the same form. A cleaner quick-TID would mean better possibilities to personalize. Routes and blocking areas are activated when clicking close to them. This is very stressful, for example in situations with incoming air targets. Possible ways to solve this: a possibility to lock routes/areas in a form, or alternatively working with layers that can be locked.

Name of participant: EE

Task TID MTD DD HW Forms Quick-TID Comments

1 x x

2 x

3 x

4 x

5 x

6 x

7 x

8 x

9 x

10 x

11 x

12 x x The form is always displayed, but if not she uses the MTD.

13 x

14 Cannot select which lost targets to delete; either one or all. Does not like that the forms close if not moved from the upper right corner.


Name of participant: FF

Task TID MTD DD HW Forms Quick-TID Comments

1 x x

2 x +/- buttons

3 x Also has it on the quickTID because of “standard settings”

4 x Often uses the form if it is displayed

5 x Sometimes also uses the form

6 x

7 x

8 x

9

10 x Often asks Gnist or read from paper notes posted on the MFC

11 x

12 x

13 x

14

Name of participant: GG

Task TID MTD DD HW Forms Quick-TID Comments

1 x x Uses DD for surface targets. Would not do the same with air targets. Simpler to find cat in DD than in form.

2 x +/- buttons, uses maximum 3 ranges

3 x

4 x The form is always displayed since YSB is in charge of Madat.

5 x

6 x The form is always displayed when analyzing a target

7 x

8 x

9 x Might use DD now that he knows it is possible

10 x Often uses paper notes

11 x Uses the All externally sent form which is always displayed

12 x Double click and check form

13 x QuickTID since often alternating between radars.

14 Want to be able to select what alarms to receive. Does for example not want alarms from the nav-system

Name of participant: HH

Task TID MTD DD HW Forms Quick-TID Comments

1 x x x Never uses “HW apply”.

2 x Uses the predestined range buttons if she knows the exact range wanted. Otherwise +/- in the MTD.

3 x

4 x The form is usually displayed. Sorting order: oldest on top.

5 x

6 x

7 x


8 x Never uses the multi hook button.

9 x Uses the Ins-button to indicate the different positions.

10 x Comm-tab.

11 x Selects New Message from the communication form which is always displayed.

12 x x Double click+form if it is not hooked, checks the MTD if the target is hooked.

13 x

14 Sorting orders that are not saved.

Name of participant: II

Task TID MTD DD HW Forms Quick-TID Comments

1 x x Always in the same way.

2 x x Usually puts up different range buttons in the quickTID. Prefers these to the +/- buttons.

3 x Prefers this since the hand is always on the roll ball.

4 x Usually has this form displayed.

5 x

6 x

7 x

8 x

9 x Never does this. However comfortable with looking for a new command in the DD-menu.

10 x Comm-catalogue form

11 x Usually has this form displayed.

12 x

13 x Never changes the settings for different radars.

14 Would like more shortcuts, for example for new target, correlation and “stöttning av plott”.

Name of participant: JJ

Task TID MTD DD HW Forms Quick-TID Comments

1 x x Uses DD for everything but course and speed. The sorting of categories is much better in the DD. Always uses the enter button to apply.

2 x Uses manual range.

3 x

4 x Always has the form displayed.

5 x

6 x

7 x x Always has the form displayed. If not, quickTID.

8 x

9 x x Uses DD if “optical” way points, otherwise the form.

10 x

11 x The form is always displayed.

12 x Takes almost no more time to double-click, but the form gives much more info.

13 x quickTID if it concerns surface radar, otherwise TID. The radar settings form is always displayed.


14 Wants to be able to delete targets from the target list. Wants more functions in the forms. Rr settings is not located under the menu Radar Selection, only under SRr. Alarms should be adjusted for the role of the specific user; technical alarms should for example be directed only to SyteO (could be regulated by "main functions"). There are only three measuring vectors, and there ought to be more. Definition of manual target: why is it not possible to define settings like id/cat etc. here? Where is the target after you press OK? Does not like that forms close themselves if not moved in the MTD.

Name of participant: KK

Task TID MTD DD HW Forms Quick-TID Comments

1 x x Track type in DD.

2 x Thinks it gives better feedback with numbers instead of +/-.

3 x

4 x The form is always displayed, but if not he uses the MTD.

5 x

6 x

7 x

8 x Does not use the multi hook button. If you are in charge of the surface pic, the form is always displayed.

9 x x x Always SW-apply, however uses Ins.

10 x Prefers a paper note.

11 x

12 x Single click.

13 x

14 The sonar forms are way too big. Too much unnecessary information. Routes can be sent to the Nass; therefore Surv areas are always made as a route instead of an area. It would be preferable with color to highlight important posts in the message list.

Name of participant: LL

Task TID MTD DD HW Forms Quick-TID Comments

1 x x Uses the DD as far as possible. Enter to apply. Hard to sort on type in the form.

2 x +/- buttons

3 x DD in order not to lose focus on the MTD. Sometimes uses HW-buttons.

4 x

5 x

6 x DD, since it does not matter which menus are displayed on the TID at the moment.

7 x Does not like that the target isn't added in the form. "Report to" sounds like "send to" in Windows.

8 x Does not use the multi hook button.

9 x Why is the form still displayed when the route is closed?

10 x Comm-tab.

11 x

12 x Single click.

13 x

14 New Tactical Area: the name disappears when adding a position. Map Objects: when choosing Map Coverage it is impossible to read the names of the different maps. Also hard to see all map names in Map Sheet.


Name of participant: MM

Task TID MTD DD HW Forms Quick-TID Comments

1 x x Enter to apply.

2 x Always uses +/- buttons.

3 x

4 x

5 x

6 x Always uses the TID-menu if the right sub menu is displayed. Uses form if lots of things need to be changed.

7 x

8 x

9 x

10 x

11 x

12 x

13 x

14 The menus for Air Weapons are poorly designed. You have to switch between different sub menus when handling an incoming air target. Too many fixed buttons in the quickTID.

Performance measurement

Task AA BB CC DD EE FF GG HH II JJ KK LL MM Average time

1 7,00 17,10 4,70 3,50 15,20 4,80 2,50 4,90 8,90 6,70 7,40 3,80 3,50 6,923076923

2 1,10 0,50 9,40 3,20 6,05 3,10 8,80 5,60 4,70 10,00 1,20 2,00 3,30 4,534615385

3 4,70 10,00 3,70 2,60 23,80 3,60 28,60 3,00 24,30 13,90 18,40 2,90 7,40 11,3

4 4,10 5,30 4,50 10,70 7,70 5,60 6,30 12,50 5,50 7,10 6,90 9,60 6,70 7,115384615

5 3,30 1,40 4,30 3,80 4,90 3,80 30,00 3,00 6,40 4,90 3,40 3,80 4,00 5,923076923

6 0,40 20,00 3,90 1,70 26,70 30,80 3,50 22,20 2,70 21,30 15,90 2,00 6,00 12,08461538

7 4,00 5,80 18,30 6,80 9,70 7,80 9,00 1,60 4,20 3,40 3,00 9,50 20,80 7,992307692

8 2,50 3,50 3,20 5,70 18,30 3,90 4,00 2,70 8,60 4,80 4,00 1,40 8,50 5,469230769

9 0,50 0,60 0,40 0,70 0,60 0,90 0,90 0,30 0,30 0,20 0,70 1,10 1,60 0,676923077

10 0,60 3,30 24,30 5,90 30,30 4,50 23,40 6,70 11,30 6,70 6,40 10,00 5,30 10,66923077

11 2,30 8,90 7,90 6,80 10,30 7,30 4,80 20,40 2,80 3,60 2,60 4,40 6,841666667

12 3,80 4,50 5,90 4,60 7,60 5,20 6,50 4,90 7,60 6,40 5,50 2,90 5,60 5,461538462

13 4,00 6,10 4,80 4,60 7,00 6,80 1,30 8,30 3,50 3,80 3,00 3,60 4,733333333

14 20,00 15,50 12,60 16,00 15,70 7,70 12,50 9,80 5,20 9,30 6,30 16,10 12,225

15 3,00 3,90 6,80 6,10 5,90 5,20 15,10 6,50 7,70 4,30 14,70 5,20 4,00 6,8

16 0,80 2,70 1,30 0,10 1,80 1,70 1,40 0,50 9,60 1,00 1,30 1,30 0,30 1,830769231

Total 62,1 109,1 90,7 80,7 185,95 113,9 161,8 93 140,3 102,2 105,5 67,4 101,1
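As a quick sanity check (not part of the original test material), the per-task averages in the rightmost column can be reproduced from the rows above. The sketch assumes each row is read as a list of decimal-comma strings, with missing cells left empty, and averages only the values that are present:

```python
# Sketch: recompute a per-task average time from the table above.
# The cells use Swedish decimal commas; a few tasks lack a value for
# one participant, so only the present cells are averaged.

def mean_time(cells):
    """Average the non-empty decimal-comma cells of one task row."""
    values = [float(c.replace(",", ".")) for c in cells if c]
    return sum(values) / len(values)

# Task 1, participants AA..MM:
task1 = ["7,00", "17,10", "4,70", "3,50", "15,20", "4,80", "2,50",
         "4,90", "8,90", "6,70", "7,40", "3,80", "3,50"]
print(round(mean_time(task1), 3))  # 6.923, matching 6,923076923 above
```

The same function handles the rows where one participant has no measurement, since empty cells are simply skipped before averaging.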


Observation Protocol

Space. What is the physical space like, and how is it laid out?
The combat intelligence central is quite crowded, with 12 operator consoles. There are no windows, but normal in-house lighting is provided (compared to old CICs).

Actors. What are the names and relevant details of the people involved?
12 operators plus one technician are present in the CIC at all times. The operators are both officers and conscripts.

Activities. What are the actors doing and why?
Collecting information from different sensors in order to get a situation picture that is as accurate as possible. If necessary, managing weapons and countermeasures.

Objects. What physical objects are present, such as furniture?
12 Cetris consoles. Paper notes at each console. Whiteboards on the walls displaying different ship data.

Acts. What are the specific individuals doing?
Everyone is very concentrated on their individual task. However, some communication takes place through the intercom system.

Events. Is what you observe part of a special event?
On the day of observation, exercises in air warfare are being conducted.

Goals. What are the actors trying to accomplish?
Collecting information from different sensors in order to get a situation picture that is as accurate as possible. Seeking perfection when managing weapons and countermeasures.

Feelings. What is the mood of the group and the individuals?
Everyone is quiet and very concentrated.

Summary Pretest questionnaire

Year of birth?

A. 1980
B. 1988
C. 1988
D. 1975
E. 1985
F. 1988
G. 1982
H. 1978
I. 1980
J. 1973
K. 1977
L. 1968
M. 1982

Highest education (civilian and/or military)?

A. TaP
B. High School
C. High School
D. TaP
E. 2,5 years of MSc, Chemical Engineering
F. High School
G. High School
H. TaP
I. TaP
J. TaP
K. TaP
L. FHS SP
M. MHS


Role onboard?

A. Electronic Warfare Officer
B. System Technician
C. Electronic Warfare Operator
D. Air Warfare Officer
E. System Technician
F. System Technician
G. Surface Warfare Officer
H. Sea Trials Unit Visby
I. Electronic Warfare Officer
J. Surface Warfare Officer
K. Sub Warfare Officer
L. Sea Trials Unit Visby
M. Sea Trials Unit Visby

Earlier roles that have involved Cetris-usage?

A. Junior Electronic Warfare Officer
B. –
C. –
D. Air Warfare Officer
E. –
F. –
G. –
H. Junior Electronic Warfare Officer
I. Junior Electronic Warfare Officer
J. –
K. Sub Warfare Officer
L. –
M. –

Have you been involved in the development of Cetris?

A. No
B. No
C. No
D. No
E. No
F. No
G. No
H. System tests in Järfälla
I. SAT
J. Written suggestions
K. No
L. C5 SSM, SAC
M. During FAT, HAT and SAT

How long is your formal education on the system?

A. None
B. One week
C. 2 weeks
D. None
E. None
F. 1-2 weeks
G. 4 h
H. None
I. None
J. 3 days
K. None
L. 3 weeks
M. None

For how long have you worked with the system onboard?

A. 3 years
B. 4-5 months
C. 2 months
D. 3,5 years
E. 5 months
F. 4-5 months
G. –


H. 2 years
I. 5 months
J. 4 years
K. –
L. 5 years
M. 3 years

Do you have any experience of other CMS´s?

A. SLI 726/PC-Maril, SESYM
B. None
C. None
D. PC-Maril
E. None
F. None
G. PC-Maril
H. PC-Maril
I. Maril 2000, PC-Maril
J. Maril 880
K. SESYM
L. SLI 726, Maril 2000, Maril 880
M. None

How many hours per week (on average) do you use Cetris?

A. >10 (at sea)
B. >10 (at sea)
C. >10 (at sea)
D. 5-7
E. >10 (at sea)
F. >10 (at sea)
G. 0-2
H. 8-10
I. 5-7
J. 0-2
K. 5-7 (at sea)
L. 0-2
M. 0-2

How do you assess your general computer skills?

A. Good
B. Very Good
C. Good
D. Good
E. Good
F. Good
G. Good
H. Basic
I. Good
J. Basic
K. Basic
L. Good
M. Good

Are you right or left handed?

A. Right
B. Right
C. Right
D. Left
E. Right
F. Right
G. Left
H. Right
I. Right
J. Right
K. Right
L. Right
M. Right

What operation system are you the most used to?

A. Windows


B. Windows
C. Windows
D. Windows
E. Windows
F. Windows
G. Windows
H. Windows
I. Windows
J. Windows
K. Windows
L. Windows
M. Windows, OSX

Summary Written Posttest

Which one of the menus (TID, MTD, Dropdown, HW-buttons) do you think is easiest to use? Why?

A. Easy in different ways/situations/purposes. Hard to compare.
B. TID, since it doesn't need the same precision as the MTD, and still has more functions than the HW-buttons.
C. Dropdown, since it makes it easy to get an overview of the different functions.
D. HW-buttons and TID.
E. TID - fast to use, MTD - visible all the time, HW - center/off center buttons.
F. TID, because I'm most used to that menu and it is the fastest.
G. MTD and DD, since they are easy to overview and demand a minimal change of focus for eyes as well as roll ball.
H. TID. Easy to overview the structure; an arrow means that there are more choices. Describing headlines.
I. DD. Usually easy to use since the hand is always on the track ball.
J. QuickTID, DD, HW
K. TID together with HW-buttons. They are distinct and give a good overview.
L. DD, since the layout is easy to learn. Furthermore, the marker is always in the MTD, and the operator's focus does not have to leave the situation picture.
M. TID, since you get a good overview.

What strengths do you think the different menus have?

TID:

A. Quick-TID plus the ability to get an overview of the choices.
B. Fast, simple, big buttons and an easy menu system.
C. It has all functions.
D. Good usability in bad weather, easy to "punch", good feedback (yellow).
E. The quickTID.
F. Fast, good to have the quickTID.
G. Possible to personalize. The amount of functions.
H. Easy to overview the structure; an arrow means that there are more choices. Describing headlines.
I. All menus are available from here.
J. The quickTID enables selected functions that are commonly used.
K. Good overview. Likes the quick-TID.
L. You can quickly get access to a lot of functions.
M. A good overview.

MTD:

A. The ability to deselect information.
B. Good for displaying information.
C. Easy & fast, does not have to take my hand off the roll ball.
D. Does not have to take my eyes off the MTD.
E. You can very quickly see information about the targets.
F. Good to be able to see course and speed etc. of a target.
G. Easy to overview.
H. A limited number of buttons, easy to overview.
I. A good overview.
J. It is in "focus" and you are able to keep your eyes on the situation pic at the same time as moving through the menu.
K. Good for target information & TFC.
L. It can be displayed at all times without interfering with the situation picture.
M. Easy to get an overview.


Dropdown:

A. The selection of functions depending on where the right click is done.
B. Good for changing id/cat on targets.
C. Easy & fast, does not have to take my hand off the roll ball.
D. –
E. Foreseeable when using new commands.
F. A good overview of the different menus, easy to find your way.
G. Fast to display.
H. Usable if I, for example, am to change the category of a target.
I. Quick.
J. Easy to overview.
K. –
L. You can use it without losing focus on the situation picture.
M. Smooth to use once you have learned it.

HW-buttons:

A. Fast, do not have to switch menu.
B. Good for center/off center. Otherwise unnecessary.
C. Only uses two of them, but they are placed close and function well.
D. The hand is resting on the roll ball, fast and distinct.
E. Center and Off center are placed conveniently.
F. Easy to find.
G. You know where they are.
H. Fast way to Center/Off center.
I. In some cases possible to combine with the track ball.
J. Visible, clear & simple.
K. Simple.
L. You can push them without actually looking at them.
M. Easy to use, you cannot fail.

What weaknesses do you think the different menus have?

TID:

A. Sometimes unstructured. Too small quick-TID.
B. –
C. Lacks the possibility to save the quick-TID. Sometimes a bit too much information. Sometimes hard to see the lower buttons.
D. –
E. A bit cluttered for those commands you are not used to.
F. Sometimes you have to push a few too many buttons to get to the right command.
G. Many buttons can be hard to find.
H. Can be hard to find "new" commands.
I. Takes time to find your way in different menus.
J. You have to find the right menu (among many).
K. –
L. Can seem a bit messy and requires the operator to change between menus.
M. Too many buttons for firing officers.

MTD:

A. Could have been more configurable.
B. Too slow and unorganized. Hard to find the functions.
C. It would be preferable to be able to place a special menu for the operator role here.
D. Easy to accidentally choose the wrong command in bad weather.
E. –
F. You have to use the roll ball a lot.
G. A lot of information in small letters, sometimes hard to select correctly.
H. Have never used the "quick" tab.
I. Functions that belong together are placed in different places.
J. A bit cluttered.
K. –
L. It is not possible to optimize according to the operator's requirements.
M. A lot of information that I don't need.

Dropdown:

A. Very unstructured. Inconsistent names on functions (compared to TID).


B. Too slow. Takes too much time to navigate in.
C. Easy to click the wrong command.
D. Easy to accidentally choose the wrong command in bad weather. Hard to focus your eyes on the small letters.
E. Not smooth to use when in a hurry.
F. Not always easy.
G. Sometimes too slow. Targets can end up in the wrong place.
H. –
I. –
J. Functions are not entirely implemented, e.g. "report to" or "target id/type".
K. Cluttered, small text, easy to navigate wrong, takes up a big part of the window. Only to be used if things are calm.
L. It is too long, since alternatives which are not selectable are still displayed.
M. –

HW-buttons:

A. Could have been more configurable.
B. Limited functionality.
C. It would be good to be able to customize HW-buttons, almost like a quick-TID.
D. Limited amount of functionality.
E. –
F. Too few functions.
G. Solid.
H. I only use Center/Off center and Ins.
I. Limited usage because of placement outside "focus".
J. Too few.
K. –
L. Limiting and hard to push. You have to let go of the trackball to use them.
M. It would be enough with two track ball buttons.

Do you always execute a command in the same way, e.g. through the same menu, or does it depend on the task at hand?

A. In the same way, since I always solve the same problems.
B. Almost always in the same way. Have most of the functions I need in the quick-TID.
C. Since I work with EW I usually don't use the TID for other things.
D. Very different, depending on task and weather conditions.
E. Almost always in the same way, except for specific occasions when there are smoother ways of doing different (???) at the same time. (???)
F. Usually the same way.
G. It depends on what I'm doing at the moment.
H. Usually the same way.
I. Usually the same way, but depending on what other menus are in use I sometimes use other ways.
J. Depends on the task, but usually the same.
K. Usually the same.
L. It depends on which menu is displayed in the TID at the moment, which object is hooked, and which information is to be put into forms.
M. It depends, but usually in the same way.

Do you think the redundancy in the system, e.g. that you can execute commands in different ways, is good?

A. Yes. It gives the operator the possibility to develop different work methods.
B. Yes, however it would have been good if it were easier to navigate in the MTD.
C. Yes and no. You could have fewer unnecessary commands in the TID if you have them elsewhere.
D. Yes.
E. Yes, but it sometimes feels unnecessary.
F. Yes, it is good since you have different choices.
G. Yes, it is good.
H. Yes, but at the same time unnecessary.
I. Yes, it increases the usability of functions, since there is a number of menus that are used frequently.
J. Yes, very good.
K. Yes.
L. Yes, it gives you flexibility.
M. Yes.

Do you sometimes think it is hard to find a specific command in the menus? If yes, how do you solve the problem?

A. Yes. Practice, method development.


B. Very seldom.
C. Yes, ask where they are.
D. No.
E. Yes, those you do not usually use. By methodically clicking through the system.
F. Yes, sometimes. Check the different menus and try to find my way, or ask an officer.
G. Yes. I look even more, try new paths.
H. Yes. I try different paths.
I. Yes. I look in the menu that is most likely, and try new paths. Alternatively, I use the DD menu.
J. Yes. Find other paths. QuickTID solves a lot of problems.
K. No.
L. Yes, look around.
M. It can be if I use functions I do not normally use.

What commands do you usually copy to your personal quick-TID? Why these? Are they always the same, or do they vary with the task at hand?

A. I have my own standard setup, since it makes operations fast and my task is always the same.
B. Always the same. Target handling functions, radar settings, comm., range, center/off center.
C. EW functions, zoom +/- for cameras, SRr, NRr.
D. Different, however usually buttons for target handling.
E. Zoom function, air plot functions, director functions. However, always the same.
F. Most of the commands that have to do with the surface plot, for example Rr, new target, text message, range, etc.
G. Rr, text message & sent/received are usually the same.
H. It depends on the task at hand, but mostly commands in the communication or target handling area.
I. Range, Rr selection, Comm and functions from SRr depending on the task.
J. Target handling, Rr, Comm, Dir
K. Target handling, Pointers, Rr, Torpedo
L. Target handling, correlation.
M. Correlate/decorrelate. Would have had more if there weren't so many preprogrammed buttons in the quick-TID for Firing Officers.

Do you think the commands are logically grouped in the different menus? If not, how would you want to change them?

A. Definitely not in the Dropdown menu. OK in the TID.
B. Yes.
C. Yes.
D. Yes, under the "Console" menu. No, under the "Air Weapons" menu.
E. Yes.
F. Relatively good.
G. Yes.
H. Yes, they are logically grouped.
I. Yes. However, there are a lot of them.
J. Under Comms you can find setup + tracks. Move tracks to "Order Handling".
K. –
L. Yes, they originate from different sensors and weapons in the different dimensions.
M. Yes, everything except the firing and director menus.

Which ones of the HW-buttons do you use?

A. The yellow ones.
B. Sometimes center/off center, but usually does that in the TID.
C. Off center/center own.
D. All the yellow ones.
E. Off center/center own.
F. Ack, center/center own.
G. Off center/center own, Ins.
H. Off center/center own, Ins.
I. Off center/center own, Ack, Apply.
J. All except Ackn.
K. All except Apply.
L. Center/off center, Ins T/B. Rather uses the keyboard, e.g. Ctrl-I.
M. Apply, Center/off center, Ins T/B


Which is the most common mistake that you do when using the system?

A. –
B. I often have the wrong target hooked when using the adjust function.
C. Problems with navigating in the TID. Clicking on the wrong command in the Dropdown menu.
D. Not using the fastest way to a function.
E. –
F. Sometimes I accidentally delete a target instead of a vector, because I use the hooked track button instead of the drop down menu.
G. When it takes time to find commands I do not normally use.
H. I choose the "wrong" way.
I. Not finding the right menu.
J. Trying to do things a bit too fast. Would like an "Undo" button.
K. Don't know.
L. Hooking the wrong track.
M. –

Are there any flaws in the system, from a functionality- or usability perspective, that you want to comment?

A. Wants to be able to change the scale of displayed forms.
B. No, I think it is well designed for the tasks I have.
C. No.
D. The "Eldledar" functionality under Air Weapons.
E. Unsmooth that windows disappear if you do not move them. You cannot select from a list which lost targets to delete; it is either one or all of them.
F. Not really.
G. The angle of the keyboard. Wants to be able to block some alarms, for example Nass alarms.
H. The order of received/sent messages.
I. The MTD is unstructured; the tab "Comms" should hold the commands that belong to communication, for example the text message & alarm symbol that are now located under the quick-tab.
J. Simpler predestination functions for surface targets. Lack of "bänknings" function. Would like it to be possible to save more tactical/surveillance areas.
K. UDS/TAS (see notes from Test AA).
L. That all graphics that are displayed are available for all operators on their MTD.
M. Too many ways to adjust the system for the Firing Officer in terms of radio buttons and checkboxes.

How easy do you think it is to understand and use Cetris as a whole?

A. 5
B. 7
C. 7
D. 8
E. 7
F. 7
G. 7
H. 7
I. 7
J. 8
K. 8
L. 7
M. 6
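As a small aside (not part of the original thesis), the thirteen ratings above can be averaged to give a single overall score; the sketch below assumes the answers map to participants A-M in the order listed:

```python
# Sketch: overall mean of the 13 posttest ease-of-use ratings above.
ratings = [5, 7, 7, 8, 7, 7, 7, 7, 7, 8, 8, 7, 6]  # participants A..M
average = sum(ratings) / len(ratings)
print(average)  # 7.0
```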

Other comments:

A. Wants to be able to develop his own methods. Wants the system to remember how the forms are placed. Would like a "quick-TID" for forms. Hard to standardize and develop user methods when it is not possible to save settings. Wants a larger quick-TID.

B. –
C. –
D. –
E. Easy to use the commands you're used to; otherwise you eventually find your way. The fact that commands are located in different places is good when you are to do "new" tasks (hence easier to find).
F. In the beginning it can be hard to find your way in the menus, and find the easiest way to a command. But when you start to learn the system you also learn the logic.
G. –
H. There are many functions that I have never used/will never use. Feels like I sometimes get lost in the menus; for example I might think I have activated Countermeasures, but more buttons need to be selected if I am to activate the command.
I. It is necessary for the operator to frequently use the system in order to achieve good results. The system is logically structured, but the amount of menus and commands initially makes it hard to navigate. Once you have obtained basic knowledge of the system, it is easy to use.
J. Easy for everyone to learn the basic functions. Takes a little more time and energy to really learn to use the system fully.
K. Measuring vectors are not good. It is not possible to hook a target without the vector locking on the target. Would like to be able to present radar video from different radars at the same time.
L. Appreciates that a study is conducted, since other countries seem to have come further in that area of research.
M. –


TRITA-CSC-E 2009:056 ISRN-KTH/CSC/E--09/056--SE

ISSN-1653-5715

www.kth.se