

HUMAN COMPUTER INTERACTION

NOTES

For

IV/II SEMESTER

Prepared by:

Suresh.V, M.Tech

Asst.Professor, Dept. of CSE,

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

LORDS INSTITUTE OF ENGINEERING AND TECHNOLOGY
HIMAYATH SAGAR ROAD

HYDERABAD-500008


Human Computer Interaction:

Introduction to HCI

HCI (human-computer interaction) is the study of how people interact with computers and to what extent computers are or are not developed for successful interaction with human beings. A significant number of major corporations and academic institutions now study HCI. As its name implies, HCI consists of three parts: the user, the computer itself, and the ways they work together.

User

By "user", we may mean an individual user or a group of users working together. An appreciation of the way people's sensory systems (sight, hearing, touch) relay information is vital. Also, different users form different conceptions or mental models about their interactions and have different ways of learning and keeping knowledge. In addition, cultural and national differences play a part.

Computer

When we talk about the computer, we're referring to any technology ranging from desktop computers to large-scale computer systems. For example, if we were discussing the design of a Website, then the Website itself would be referred to as "the computer". Devices such as mobile phones or VCRs can also be considered to be "computers".

Interaction

There are obvious differences between humans and machines. In spite of these, HCI attempts to ensure that they both get on with each other and interact successfully. In order to achieve a usable system, you need to apply what you know about humans and computers, and consult with likely users throughout the design process. In real systems, the schedule and the budget are important, and it is vital to find a balance between what would be ideal for the users and what is feasible in reality.

The Goals of HCI

The goals of HCI are to produce usable and safe systems, as well as functional systems. In order to produce computer systems with good usability, developers must attempt to:

• Understand the factors that determine how people use technology
• Develop tools and techniques to enable building suitable systems
• Achieve efficient, effective, and safe interaction
• Put people first

Underlying the whole theme of HCI is the belief that people using a computer system should come first. Their needs, capabilities and preferences for conducting various tasks should direct developers in the way that they design systems. People should not have to change the way that they use a system in order to fit in with it. Instead, the system should be designed to match their requirements.


USABILITY

Usability is one of the key concepts in HCI. It is concerned with making systems easy to learn and use. A usable system is:

• Easy to learn
• Easy to remember how to use
• Effective to use
• Efficient to use
• Safe to use
• Enjoyable to use

Why is usability important?

Many everyday systems and products seem to be designed with little regard to usability. This leads to frustration, wasted time and errors. This list contains examples of interactive products: mobile phone, computer, personal organizer, remote control, soft drink machine, coffee machine, ATM, ticket machine, library information system, the web, photocopier, watch, printer, stereo, calculator, video game, etc. How many are actually easy, effortless, and enjoyable to use? For example, a photocopier might have buttons like these on its control panel.

Imagine that you just put your document into the photocopier and set the photocopier to make 15 copies, sorted and stapled. Then you push the big button with the "C" to start making your copies. What do you think will happen?

(a) The photocopier makes the copies correctly.
(b) The photocopier settings are cleared and no copies are made.

If you selected (b) you are right! The "C" stands for clear, not copy. The copy button is actually the button on the left with the "line in a diamond" symbol. This symbol is widely used on photocopiers, but is of little help to someone who is unfamiliar with this.

Factors affecting usability

The main factors affecting usability are:
• Format of input
• Feedback
• Visibility
• Affordance

The principles of visibility and affordance were identified by HCI pioneer Donald Norman.
• Visibility concerns the mapping between a control and its effect. For example, controls in cars are generally visible – the steering wheel has just one function, there is good feedback and it is easy to understand what it does.


Mobile phones and VCRs often have poor visibility – there is little visual mapping between controls and the users' goals, and controls can have multiple functions.
• The affordance of an object is the sort of operations and manipulations that can be done to it. A door affords opening, a chair affords support. The important factor for design is perceived affordance – what a person thinks can be done with an object. For example, does the design of a door suggest that it should be pushed or pulled open?

HCI and its evolution

This section lists some of the key developments and people in the evolution of HCI. You will look at some of these in more detail during this course.

Human factors engineering (Frank Gilbreth, post World War 1) – study of operator’s muscular capabilities and limitations.

Aircraft cockpits (World War 2) – emphasis switched to perceptual and decision making capabilities

Symbiosis (J.C.R. Licklider, 1960’s) - human operator and computer form two distinct but interdependent systems, augment each other’s capabilities

Cognitive psychology (Donald Norman and many others, late 1970’s, early 1980’s) - adapting findings to design of user interfaces

Development of the GUI interface (Xerox, Apple, early 1980's)

Field of HCI came into being (mid 1980's) – key principles of User Centred Design and Direct Manipulation emerged.

Development of software design tools (e.g. Visual Basic, late 1980's, early 1990's)

Usability engineering (Jakob Nielsen, 1990's) – mainly in industry rather than academic research.

Web usability (late 1990's) – the main focus of HCI research today.

Factors in HCI

There are a large number of factors which should be considered in the analysis and design of a system using HCI principles. Many of these factors interact with each other, making the analysis even more complex. The main factors are listed in the table below.

Organisation Factors: training, job design, politics, roles, work organisation

Environmental Factors: noise, heating, lighting, ventilation

Health and Safety Factors

The User: cognitive processes and capabilities; motivation, enjoyment, satisfaction, personality, experience

Comfort Factors: seating, equipment, layout

User Interface: input devices, output devices, dialogue structures, use of colour, icons, commands, navigation, graphics, natural language, user support, multimedia


Task Factors: easy, complex, novel, task allocation, monitoring, skills

Constraints: cost, timescales, budgets, staff, equipment, buildings

System Functionality: hardware, software, application

Productivity Factors: increase output, increase quality, decrease costs, decrease errors, increase innovation

Disciplines contributing to HCI

The field of HCI covers a wide range of topics, and its development has relied on contributions from many disciplines. Some of the main disciplines which have contributed to HCI are:

Computer Science: technology; software design, development & maintenance; User Interface Management Systems (UIMS) & User Interface Development Environments (UIDE); prototyping tools; graphics

Cognitive Psychology: information-processing capabilities and limitations; cooperative working; performance prediction

Social Psychology: social & organizational structures

Ergonomics/Human Factors: hardware design; display readability

Linguistics: natural language interfaces

Artificial Intelligence: intelligent software

Philosophy, Sociology & Anthropology: computer-supported cooperative work (CSCW)

Engineering & Design: graphic design; engineering principles

EXERCISE


1. Suggest some ways in which the design of the copier buttons shown earlier could be improved.
2. For the following scenarios, map out what you do (USER INPUT) with the way the system seems to operate (SYSTEM FEEDBACK):
• Buying the books "Human Computer Interaction (J. Preece)" and "Shaping Web Usability (A. Badre)" on the internet
• Sending a text message on a mobile phone
3. Consider the factors involved in the design of a new library catalogue system using HCI principles.
4. Use the internet to find information on the work of Donald Norman and Jakob Nielsen.


Chapter 2

Human Cognition

Cognitive Psychology

The science of psychology has been very influential in Human Computer Interaction. In this course we will look at some of the main developments and theories in cognitive psychology (the study of human perception, attention, memory and knowledge), and the ways in which these have been applied in the design of computer interfaces.

Cognition and Cognitive Frameworks

Cognition is the process by which we gain knowledge. The processes which contribute to cognition include:

• understanding
• remembering
• reasoning
• attending
• being aware
• acquiring skills
• creating new ideas

A key aim of HCI is to understand how humans interact with computers, and to represent how knowledge is passed between the two. The basis for this aspect of HCI is the science of cognitive psychology. The results of the work of cognitive psychologists provide many lessons which can be applied in the design of computer interfaces. These results are expressed in the form of cognitive frameworks. This section describes some of the important frameworks which have been developed by psychologists.

Human Information Processing

HCI is fundamentally an information-processing task. The human information processing approach is based on the idea that human performance, from displayed information to a response, is a function of several processing stages. The nature of these stages, how they are arranged, and the factors that influence how quickly and accurately a particular stage operates, can be discovered through appropriate research methods. Human information processing analyses are used in HCI in several ways:

Basic facts and theories about information-processing capabilities are taken into consideration when designing interfaces and tasks.

Information-processing methods are used in HCI to conduct empirical studies evaluating the cognitive requirements of various tasks in which a human uses a computer.

Computational models developed in HCI are intended to characterize the information processing of a user interacting with a computer, and to predict, or model, human performance with alternative interfaces.

The idea of human information processing is that information enters and exits the human mind through a series of ordered stages (Lindsay & Norman, 1977), as shown in the figure.


The Extended Human Information Processing model

The basic information processing model shown above does not account for the importance of:
• Attention – processing only takes place when the human is focused on the task
• Memory – information may be stored in memory, and information already in memory may be used in processing the input

The figure below illustrates the extended human information processing model (Barber, 1988). It shows that attention and memory interact with all the stages of processing.

An important question when researching memory is how it is structured. Memory can be broadly categorized into three parts, with links between them along which information coming in through the senses is moved.

The Multi-Store model of memory

In 1968, Atkinson and Shiffrin developed a model of memory formed of three 'buffers' which store information, together with control processes which move information between the buffers. The three stores identified are:
• Sensory information store
• Short-term memory (more recently known as working memory)
• Long-term memory


The Model Human Processor

An important concept from cognitive psychology is the model human processor (MHP) (Card, Moran, and Newell, 1983). This describes the cognitive process that people go through between perception and action. It is important to the study of HCI because cognitive processing can have a significant effect on performance, including task completion time, number of errors, and ease of use. This model was based on the human information processing model. The model human processor consists of three interacting systems. Each has its own memory and processor.

• Perceptual processor – outputs into audio storage and visual storage
• Cognitive processor – outputs into working memory; has access to working memory and long-term memory
• Motor processor – carries out actions

The MHP model was used as the basis for the GOMS family of techniques proposed by Card, Moran, and Newell (1983), for quantitatively modeling and describing human task performance. GOMS stands for Goals, Operators, Methods, and Selection Rules.
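As a worked illustration of this kind of quantitative modelling (not part of the original notes), the Python sketch below applies the Keystroke-Level Model, the simplest member of the GOMS family, to a hypothetical editing task. The operator durations are the commonly quoted Card, Moran and Newell estimates and should be treated as rough averages; the task itself is invented for the example.

```python
# Minimal Keystroke-Level Model (KLM) sketch -- one member of the GOMS family.
# Operator times are the commonly quoted Card, Moran & Newell (1983) estimates;
# treat them as rough averages, not exact values.
OPERATOR_TIMES = {
    "K": 0.20,   # keystroke or button press (average skilled typist)
    "P": 1.10,   # point at a target with the mouse
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation before an action
    "B": 0.10,   # press or release a mouse button
}

def klm_estimate(operators):
    """Return the predicted expert task time (seconds) for a sequence of operators."""
    return sum(OPERATOR_TIMES[op] for op in operators)

if __name__ == "__main__":
    # Hypothetical task: delete a word by double-clicking it and pressing Delete.
    # H (reach for mouse), M (find the word), P (point at it),
    # B B B B (double click = two press/release pairs),
    # H (hands back to keyboard), K (press Delete).
    task = ["H", "M", "P", "B", "B", "B", "B", "H", "K"]
    print(f"Predicted time: {klm_estimate(task):.2f} s")
```

Summing the operator times in this way gives a rough prediction of expert, error-free performance that can be used to compare alternative interface designs before building them.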

Problems with the Model Human Processor approach

• It models performance as a series of processing steps – is that appropriate?
• It is too focused on one person and one task
• It is an overly simplistic view of human behavior
• It ignores the environment and other people

Beyond the Model Human Processor

More recent research in cognitive frameworks has focused on:
• How knowledge is represented
• How mental models are used in HCI
• How users learn and become experienced on systems
• How interface metaphors help to match users' expectations (and how they don't!)
• How a person's mentally-held conceptual model affects behavior


This represents a change in emphasis from human factors to human actors – a change in focus on humans from being passive and depersonalized to active and controlling. The person is considered as an autonomous agent able to coordinate and regulate behavior, not a passive element in a human-machine system.

Computational versus Connectionist Approaches

Cognitive theories are classed as either computational or connectionist. The computational approach uses the computer as a metaphor for how the brain works, similar to the information processing models described above. The connectionist approach rejects the computer metaphor in favour of the brain metaphor, in which cognition is represented by neural networks. Cognitive processes are characterized as the activation of nodes and the connections between them.

Distributed Cognition

Distributed cognition is a framework proposed by Hutchins (1991). Its basis is that to explain human behavior you have to look beyond the individual human and the individual task. The functional system is a collection of actors, technology, setting and their interrelations to one another. Examples of functional systems which have been studied include:
• Ship navigation
• Air traffic control
• Computer programming teams

The technique is used to analyze the coordination of components in the functional system. It looks at:
• Information and how it propagates through the system
• How it transforms between the different representational states found in the functional system

One property of distributed cognition that is often discovered through analysis is situation awareness (Norman, 1993), which is the silent and inter-subjective communication that is shared among a group. When a team is working closely together the members will monitor each other to keep abreast of what each member is doing. This monitoring is not explicit – rather the team members monitor each other through glancing and inadvertent overhearing. The two main concerns of distributed cognition are:

• To map out how the different representational states are coordinated across time, location and objects
• To analyze and explain breakdowns

Example: An electricity power plant was redesigned so that the old system, consisting of a single large display screen which could be seen by all of a team of three operators, was replaced by individual workstation screens for the operators. This worked well until there was a problem which resulted in dangerous gases being released. The team of operators had great difficulty in finding the source of the problem and deciding what to do. Because they no longer had access to all the information, they had to spend time explicitly coordinating their understanding of the situation by talking to each other. Under the old system, the knowledge would be shared – one operator would know what was happening with another's area of the plant without explicit communication.


Although the team's individual responsibilities would still have been clearly divided, the knowledge of the system would be shared. How could the new system of individual workstations be modified to make better use of distributed cognition?


Chapter 3

Perception and Representation

Perception

An understanding of the way humans perceive visual information is important in the design of visual displays in computer systems. Several competing theories have been proposed to explain the way we see. These can be split into two classes: constructivist and ecological.

Constructivist theorists believe that seeing is an active process in which our view is constructed from both information in the environment and from previously stored knowledge. Perception involves the intervention of representations and memories. What we see is not a replica or copy; rather, it is a model constructed by the visual system through transforming, enhancing, distorting and discarding information.

Ecological theorists believe that perception is a process of 'picking up' information from the environment, with no construction or elaboration needed. Users intentionally engage in activities that cause the necessary information to become apparent. We explore objects in the environment.

The Gestalt Laws of perceptual organization (Constructivist)

Look at the following figure: What does it say? What do you notice about the middle letter of each word?

You probably read this as 'the cat'. You interpreted the middle letter in each word according to the context. Your prior knowledge of the world helped you make sense of ambiguous information. This is an example of the constructivist process.

Gestalt psychology is a movement in experimental psychology that began just prior to World War I. It made important contributions to the study of visual perception and problem solving. The Gestalt approach emphasizes that we perceive objects as well-organized patterns rather than separate component parts. The Gestalt psychologists were constructivists. The focal point of Gestalt theory is the idea of "grouping", or how we tend to interpret a visual field or problem in a certain way. The main factors that determine grouping are:

Proximity - how elements tend to be grouped together depending on their closeness.


Similarity - how items that are similar in some way tend to be grouped together. Similarity can be shape, colour, etc.

Closure - how items are grouped together if they tend to complete a pattern.

Good continuation - we tend to assign objects to an entity that is defined by smooth lines or curves.


Example in user interface design – proximity used to give structure in a form:
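The form screenshot that originally illustrated this point is not reproduced here. As a stand-in, the following Tkinter sketch (a hypothetical example, not taken from the notes) uses spacing and labelled frames so that fields belonging together sit close to one another, letting proximity convey the structure of the form; the group and field names are invented.

```python
# Hypothetical sketch: using proximity (spacing and grouping) to structure a form.
# Related fields sit close together inside a labelled frame; unrelated groups are
# separated by extra padding, so the grouping is perceived without reading labels.
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
root.title("Proximity in form design")

groups = {
    "Delivery details": ["Name", "Address", "Postcode"],
    "Payment details": ["Card number", "Expiry date"],
}

for group_name, fields in groups.items():
    frame = ttk.LabelFrame(root, text=group_name)
    # Large external padding separates the groups; small internal padding keeps
    # the fields of one group visually close together.
    frame.pack(fill="x", padx=12, pady=12)
    for row, field in enumerate(fields):
        ttk.Label(frame, text=field).grid(row=row, column=0, sticky="w", padx=4, pady=2)
        ttk.Entry(frame, width=30).grid(row=row, column=1, padx=4, pady=2)

ttk.Button(root, text="Submit").pack(pady=(0, 12))
root.mainloop()
```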

Affordances (Ecological)

The ecological approach argues that perception is a direct process, in which information is simply detected rather than being constructed (Gibson, 1979). A central concept of the ecological approach is the idea of affordance (Norman, 1988). The possible behaviour of a system is the behaviour afforded by the system. A door affords opening, for example. A vertical scrollbar in a graphical user interface affords movement up or down. The affordance is a visual clue that suggests that an action is possible. When the affordance of an object is perceptually obvious, it is easy to know how to interact with it.

Norman's first and ongoing example is that of a door. With some doors it is difficult to see whether they should be pushed or pulled; with others it is obvious. The same is true of the ring controls on a cooker. How do you turn on the right rear ring? "When simple things need labels or instructions, the design is bad."


Affordances in Software

Look at these two possible designs for a vertical scroll bar. Both scrollbars afford movement up or down. What visual clues in the design on the right make this affordance obvious?

Perceived Affordances in Software

The following list suggests the actions afforded by common user interface controls:
• Buttons are to push.
• Scroll bars are to scroll.
• Checkboxes are to check.
• List boxes are to select from.

In some of these cases the affordances of GUI objects rely on prior knowledge or learning. We have learned that something that looks like a button on the screen is for clicking, a text box is for writing in, and so on. For example, saying that a button on a screen affords clicking, whereas the rest of the screen does not, is inaccurate. You could actually click anywhere on the screen. We have learned that clicking on a button-shaped area of the screen results in an action being taken.

Link affordance in web sites

It is important for web site users to know which objects on the page can be clicked to follow links. This is known as link affordance. The following lists give some guidelines for improving link affordance:

Text links
• Do use blue underlined text for most links
• Do use underlined headers as links
• Do use words in a left-justified list as individual links
• Do use bullets, arrows or some other indicator in front of certain text links
• Do use "mouse-overs" appropriately and with care
• Do use page location to help communicate that an item is clickable
- Left or right margins
- Top or bottom of the page
• Do use the term "click here" when appropriate

Graphical links


• Do use meaningful words inside graphical links
- Target locations (Home, Back to Top, Next)
- Common actions (Go, Login, Submit, Register)
• Do use graphical "tabs" that look like real-world tabs
• Do use graphical buttons that look like real-world pushbuttons
• Do use clear, descriptive labels inside tabs and pushbuttons
• Do make all company logos clickable (to the home page)

Influence of theories of perception on HCI

The constructivist and ecological theorists fundamentally disagree on the nature of perception. However, interface and web designers should recognise that both theories can be useful in the design of interfaces:
• The Gestalt laws can help in laying out interface components to make use of the context and prior knowledge of the user
• Paying careful attention to the affordances of objects ensures that the information required to use them can easily be detected by the user.

Representation

A graphical user interface must represent information visually in a way which is meaningful to the user. The representations may be highly sophisticated, for example 3-dimensional simulated 'walkthroughs'. To represent 3D objects on a 2D surface, perceptual depth cues are used:
• Size
• Interposition
• Contrast, clarity and brightness
• Shadow
• Texture
• Motion parallax

Graphical Coding

Visual representations can also be used as a form of coding of information at the user interface. System processes, data objects and other features can be represented by different forms of graphical coding. Some codings are abstract, where there is no relation other than established convention, for example:
• Abstract codes to represent files
• Abstract shapes to represent different objects
• Reverse video to represent status
• Colour to represent different options

Other codings are more direct, for example:
• Different icon sizes to reflect different file sizes
• Different line widths to represent different tool widths in a drawing package
• Bar charts to show trends in numerical data

The most direct codings are icons which represent the objects they portray, for example:
• The wastebin icon
• A paper file to represent a file.


Comparison of coding methods (Maguire, 1987)

Coding method – Maximum number of codes – Comments
• Alphanumerics – unlimited – Highly versatile. Meaning can be self-evident. Location time may be longer than for a graphic code.
• Shapes – 10-20 – Very effective if the code matches the object or operation represented.
• Colour – 4-11 – Attractive and efficient. Excessive use is confusing. Limited value for the colour-blind.
• Line angle – 8-11 – Good in special cases, for example wind direction.
• Line length – 3-4 – Good. Can clutter the display if many codes are displayed.
• Line width – 2-3 – Good.
• Line style – 5-9 – Good.
• Object size – 3-5 – Fair. Can take up considerable space. Location time longer than for shape and colour.
• Brightness – 2-4 – Can be fatiguing, especially if screen contrast is poor.
• Blink – 2-4 – Good for getting attention, but should be suppressible afterwards. Annoying if overused. Limit to small fields.
• Reverse video – no data – Effective for making data stand out. If a large area is in reverse video, flicker is more easily perceived.
• Underlining – no data – Useful but can reduce text legibility.
• Combination of codes – unlimited – Can reinforce coding, but complex combinations can be confusing.

Colour coding

Use of colour improves the effectiveness of:
• The recognition process
• Detection of patterns
• Search (scanning)

Usage:
• Segmentation: colour is a powerful way of dividing a display into separate regions. Areas/items belonging to each other should have the same colour (note that this is also related to the Gestalt law of similarity).
• Amount of colour: too many colours will increase search time – colour pollution.
• Task demands: colour is most powerful in search tasks, less powerful in tasks requiring categorization/memorization.
• Experience of users: more valuable to novices than to experts.

Colour theory

Coloured screens are the primary sensory stimulus that software produces, and poor colour choices can significantly reduce the usability of GUI applications or web sites. Colour can affect readability and recognition as described above, and it can also affect the user's overall impression of an interface. An application which uses clashing or discordant colours will often provoke a negative reaction in users, who will not enjoy using it. Good use of colour can be powerful in any application, but is particularly important in web pages. Choice of harmonious colours can be helped by a basic understanding of colour theory. The main tool for working with colours is the simple colour wheel shown here. (You are best to look at these notes online to see them in colour!)


The black triangle in the centre points out the primary colours. If you mix two primary colours, you will get the secondary colour that's pointed out by the lighter grey triangle. When you mix a primary with either of its closest secondary colours, you get a tertiary colour; these are located between the points of the black and grey triangles.

A harmonious set of colours for an interface is known as a colour scheme. Colour schemes are based on the colour wheel. There are three main sets of colour schemes: analogous, complementary, and monochromatic. These are illustrated using an application called ColorWheel Pro, which is designed to allow colour schemes to be created and previewed. Each scheme is illustrated by a colour wheel showing the range of selected colours, and the scheme applied to a logo.

Analogous

Analogous colours are those that are adjacent to each other on the colour wheel. If you pick any range of colours between two points of either triangle on our colour wheel (i.e. yellow to red, orange to violet, red to blue, etc.), you will have an analogous colour scheme.

Example of an analogous scheme – http://www.zeldman.com

Complementary

Complementary colour schemes consist of colours that are located opposite each other on the colour wheel, such as green and red, yellow and violet, or orange and blue. These colours are said to complement one another.


When placed next to each other, a phenomenon known as simultaneous contrast occurs, wherein each colour makes the other look more vibrant.

There are two possible pitfalls with complementary schemes:
• If you place complementary colours on top of one another, this creates an illusion of movement. This is particularly bad for text.
• Colours like cyan and red, which are not quite directly across from each other, yet are not close enough to be analogous, will clash rather than complement. These colours are known as discordants.

Complementary colour schemes are often more complex than simply using two colours from opposite sides of the colour wheel. For example, a split complementary scheme is one in which a colour is paired with the two colours adjacent to its complement.

Example – http://www.ufl.edu

Monochromatic

If you mix white with a pure colour, you produce tints of that colour. If you mix black with a pure colour, you get shades of that colour. If you create an image using only the tints and shades of one colour you have a monochromatic colour scheme.

Example – http://www.yakima.com
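To make the colour-wheel arithmetic behind these three schemes concrete, the sketch below (an addition to these notes, not part of the original) derives analogous, complementary and monochromatic colours from a single base colour using Python's standard colorsys module. Hue is treated as a position on the wheel, so the complementary colour lies half a turn away and analogous colours are small rotations either side, while tints and shades come from raising or lowering lightness; the base colour #3366cc is an arbitrary choice.

```python
# Sketch: deriving colour schemes from a base colour by rotating the hue
# (analogous, complementary) or varying lightness (monochromatic tints/shades).
import colorsys

def hex_to_hls(hex_colour):
    r, g, b = (int(hex_colour[i:i + 2], 16) / 255 for i in (1, 3, 5))
    return colorsys.rgb_to_hls(r, g, b)

def hls_to_hex(h, l, s):
    r, g, b = colorsys.hls_to_rgb(h % 1.0, min(max(l, 0.0), 1.0), s)
    return "#{:02x}{:02x}{:02x}".format(int(r * 255), int(g * 255), int(b * 255))

def colour_schemes(base_hex):
    h, l, s = hex_to_hls(base_hex)
    return {
        # Neighbouring hues, 30 degrees (1/12 of the wheel) either side.
        "analogous": [hls_to_hex(h - 1/12, l, s), base_hex, hls_to_hex(h + 1/12, l, s)],
        # Opposite side of the wheel (180 degrees away).
        "complementary": [base_hex, hls_to_hex(h + 0.5, l, s)],
        # Same hue, mixed with black (shade) or white (tint).
        "monochromatic": [hls_to_hex(h, l * 0.5, s), base_hex, hls_to_hex(h, l + (1 - l) * 0.5, s)],
    }

if __name__ == "__main__":
    for name, colours in colour_schemes("#3366cc").items():  # hypothetical base colour
        print(f"{name:14s} {colours}")
```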


Chapter 4

Attention and Memory

Attention

The human brain is limited in capacity. It is important to design user interfaces which take into account the attention and memory constraints of the users. This means that we should design meaningful and memorable interfaces. Interfaces should be structured to be attention-grabbing and require minimal effort to learn and remember. The user should be able to deal with information and not get overloaded.

Our ability to attend to one event from what seems like a mass of competing stimuli has been described psychologically as focused attention. The "cocktail party effect" – the ability to focus one's listening attention on a single talker among a cacophony of conversations and background noise – has been recognized for some time. We know from psychology that attention can be focused on one stream of information (e.g. what someone is saying) or divided (e.g. focused both on what someone is saying and what someone else is doing). We also know that attention can be voluntary (we are in an attentive state already) or involuntary (attention is grabbed). Careful consideration of these different states of attention can help designers to identify situations where a user's attention may be overstretched, and therefore needs extra prompts or error protection, and to devise appropriate attention-attracting techniques. Sensory processes, vision in particular, are disproportionately sensitive to change and movement in the environment. Interface designers can exploit this by, say, relying on animation of an otherwise unobtrusive icon to indicate an attention-worthy event.

Focusing attention at the interface

Techniques which can be used to guide the user's attention include:

• Structure – grouping, based on the Gestalt laws
• Spatial and temporal cues – where things are positioned or when they appear
• Colour coding, as described in the previous chapter
• Alerting techniques, including animation or sound

Important information should be displayed in a prominent place to catch the user's eye. Less important information can be relegated to the background in specific areas – the user should know where to look. Information not often requested should not be on the screen, but should be accessible when needed. Note that the concepts of attention and perception are closely related.

Multitasking and Interruptions

In a work environment using computers, people are often subject to being interrupted, for example by a message or email arriving. In addition, it is common for people to be multitasking – carrying out a number of tasks during the same period of time by alternating between them. This is much more common than performing and completing tasks one after another.


In complex environments, users may be performing one primary task, which is the most important at that time, and also one or more less important secondary tasks. For example, a pilot's tasks include attending to air traffic control communications, monitoring flight instruments, dealing with system malfunctions which may arise, and so on. At any time, one of these will be the primary task, which is said to be foregrounded, while the other activities are temporarily suspended.

People are in general good at multitasking but are often prone to distraction. On returning to an activity, they may have forgotten where they left off. People often develop their own strategies to help them remember what actions they need to perform when they return to an activity. Such external representations, or cognitive aids (Norman, 1992), may include writing lists or notes, or even tying a knot in a handkerchief. Cognitive aids have applications in HCI, where the system can be designed to provide them:

• The system should inform the user where they were
• The system should remind the user of common tasks

For example, Amazon's checkout procedure displays a list of steps involved in the process, and indicates what step has been reached.

Automatic Processing

Many activities are repeated so often that they become automatic – we do them without any need to think. Examples include riding a bike, writing, typing, and so on. Automatic cognitive processes are:

• fast
• demanding minimal attention
• unavailable to consciousness

The classic example used to illustrate the nature of an automatic operation is the Stroop effect. To experiment with this, look at the colour sheet at the end of this chapter. This experiment demonstrates interference. The interference between the different information (what the words say and the colour of the words) your brain receives causes a problem. There are two theories that may explain the Stroop effect:

• Speed of Processing Theory: the interference occurs because words are read faster than colors are named.

• Selective Attention Theory: the interference occurs because naming colors requires more attention than reading words.
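A simple way to experience this interference for yourself is to time colour-naming for congruent versus incongruent colour words. The console sketch below is an illustration added to these notes (it is not the colour sheet referred to above); it prints colour words in coloured ink using ANSI escape codes and compares your mean response times across the two conditions.

```python
# Sketch of a console Stroop test: name the ink colour, not the word.
# Uses ANSI escape codes for colour, so run it in a terminal that supports them.
import random
import time

ANSI = {"red": "31", "green": "32", "yellow": "33", "blue": "34"}

def trial(word, ink):
    """Show one coloured word and return (correct, response_time_seconds)."""
    print(f"\033[{ANSI[ink]}m{word.upper()}\033[0m")
    start = time.time()
    answer = input("Ink colour? ").strip().lower()
    return answer == ink, time.time() - start

def run(n_trials=6, congruent=True):
    times = []
    for _ in range(n_trials):
        word = random.choice(list(ANSI))
        ink = word if congruent else random.choice([c for c in ANSI if c != word])
        correct, rt = trial(word, ink)
        if correct:
            times.append(rt)
    return sum(times) / len(times) if times else float("nan")

if __name__ == "__main__":
    print("Congruent block (word matches ink):")
    congruent_mean = run(congruent=True)
    print("Incongruent block (word conflicts with ink):")
    incongruent_mean = run(congruent=False)
    # The Stroop effect predicts slower responses in the incongruent block.
    print(f"Mean RT congruent:   {congruent_mean:.2f} s")
    print(f"Mean RT incongruent: {incongruent_mean:.2f} s")
```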

If a process is not automatic, it is known as a controlled process.

Automatic processes:
• are not affected by the limited capacity of the brain
• do not require attention
• are difficult to change once they have been learned

Controlled processes:
• are non-automatic processes
• have limited capacity
• require attention and conscious control (Shiffrin & Schneider, 1977)


Memory Constraints

The human memory system is very versatile, but it is by no means infallible. We find some things easy to remember, while other things can be difficult to remember. The same is true when we try to remember how to interact with a computer system. Some operations are simple to remember while others take a long time to learn and are quickly forgotten. An understanding of human memory can be helpful in designing interfaces that people will find easy to remember how to use.

Levels of Processing Theory

The extent to which things can be remembered depends on their meaningfulness. In psychology, the levels of processing theory (Craik and Lockhart, 1972) has been developed to account for this. This says that information can be processed at different levels, from a shallow analysis of a stimulus (for example the sound of a word) to a deep or semantic analysis. The meaningfulness of an item determines the depth of the processing – the more meaningful an item, the deeper the level of processing and the more likely it is to be remembered.

Meaningful Interfaces

This suggests that computer interfaces should be designed to be meaningful. This applies both to interfaces which use commands and interfaces which use icons or graphical representations for actions. In either case, the factors which determine the meaningfulness are:

• The context in which the command or icon is used
• The task it is being used for
• The form of the representation
• The underlying concept

Meaningfulness of Commands

The following guidelines are examples taken from a larger set which was compiled to suggest how to ensure that commands are meaningful (Booth, 1994; Helander, 1988):

• Syntax and commands should be kept simple and natural
• The number of commands in a system should be limited and in a limited format
• Consider the user's context and knowledge when choosing command names
• Choose meaningful command names – words familiar to the user
• The system should recognize synonymous and alternative forms of command syntax
• Allow users to create their own names for commands

Sometimes a command name may be a word familiar to the user in a different context. For example, the word 'CUT' to a computer novice will mean to sever with a sharp instrument, rather than to remove from a document and store for future use. This can make the CUT command initially confusing.

Meaningfulness of Icons

Icons can be used for a wide range of functions in interfaces, for example:

• Labeling e.g. toolbar item, web page link


• Warning e.g. error message

• Identifying e.g. file types, applications

• Manipulating e.g. tools for drawing, zooming

• Container e.g. wastebasket, folder

The extent to which the meaning of an icon is understood depends on how it is represented. Representational forms of icons can be classified as follows:

Resemblance icons – depict the underlying concept through an analogous image.

The road sign for "falling rocks" presents a clear resemblance of the roadside hazard.

This represents the Windows calculator application, and resembles a calculator.

Exemplar icons – serve as a typical example.

a knife and fork used in a public information sign to represent "restaurant services". The image shows the most basic attribute of what is done in a restaurant i.e. eating.


This represents Microsoft Outlook – the clock and letter are examples of the tasks this application does (calendar and email tasks).

Symbolic icons – convey the meaning at a higher level of abstraction.

the picture of a wine glass with a fracture conveys the concept of fragility

This represents a connection to the internet – the globe conveys the concept of the internet.

Arbitrary icons – bear no relation to the underlying concept.

the bio-hazard sign consists of three partially overlaid circles

This represents a software design application called Enterprise Architect. There is no obvious meaning in the icon to tell you what task you can do with the application.

Note that arbitrary icons should not be regarded as poor designs, even though they must be learned. Such symbols may be chosen to be unique and/or compact, such as a red no-entry sign with a white horizontal bar, designed to avoid dangerous misinterpretation.

Combination Icons

Icons are often favoured as an alternative to commands. It is common for users who use a system infrequently to forget commands, while they are less likely to forget icons once learnt. However, the meaning of icons can sometimes be confusing, and it is now quite common to use a redundant form of representation where the icons are displayed together with the command names.


The disadvantage of this approach is that it takes up more screen space. This can be reduced by using pop-up tool tips to provide the text.
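As an illustration of the tooltip approach (a hypothetical sketch, not taken from the notes), the Tkinter code below shows an icon-only toolbar button whose command name pops up when the pointer hovers over it, so the redundant text is available on demand without occupying permanent screen space; the scissors glyph and the "Cut" label are invented stand-ins for a real icon and command.

```python
# Sketch: an icon-only toolbar button with a pop-up tooltip giving the command name,
# so the textual label is available on demand without using permanent screen space.
import tkinter as tk
from tkinter import ttk

class Tooltip:
    def __init__(self, widget, text):
        self.widget, self.text, self.tip = widget, text, None
        widget.bind("<Enter>", self.show)
        widget.bind("<Leave>", self.hide)

    def show(self, _event):
        # Borderless top-level window placed just below the widget.
        x = self.widget.winfo_rootx() + 10
        y = self.widget.winfo_rooty() + self.widget.winfo_height() + 4
        self.tip = tk.Toplevel(self.widget)
        self.tip.wm_overrideredirect(True)
        self.tip.wm_geometry(f"+{x}+{y}")
        tk.Label(self.tip, text=self.text, background="lightyellow",
                 relief="solid", borderwidth=1, padx=4, pady=2).pack()

    def hide(self, _event):
        if self.tip is not None:
            self.tip.destroy()
            self.tip = None

root = tk.Tk()
root.title("Combination icon example")
# A text glyph stands in for a real icon image here.
button = ttk.Button(root, text="\u2702", command=lambda: print("Cut"))
button.pack(padx=20, pady=20)
Tooltip(button, "Cut (Ctrl+X)")
root.mainloop()
```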

Icons in Web Pages

The use of graphical representation in web pages tends to be quite different to that in other interfaces. Most user actions (links) are usually represented by text (although the text may actually be an image).

There is often isolated and specific use of icons and graphical representations for links. The following examples are from amazon.co.uk.

• Buttons to submit forms, e.g. search boxes:

• Images of items - the user can click to get more information on the item:

• Links to specific site features, e.g. shopping basket:

Icon use in web pages is sparing for a number of reasons, for example:
• Pages often convey information and branding graphically, so it would be difficult to focus attention on icons among other graphical content.
• Graphical links are often banners to focus attention on a small number of specific items.
• The web browser has its own set of icons.

Interference: The Stroop Effect

Don't read the words below – just say the colours they're printed in, and do this aloud as fast as you can.


If you're like most people, your first inclination was to read the words, 'red, yellow, green...', rather than the colours they're printed in, 'blue, green, red...' You've just experienced interference. When you look at one of the words, you see both its colour and its meaning. If those two pieces of evidence are in conflict, you have to make a choice. Because experience has taught you that word meaning is more important than ink colour, interference occurs when you try to pay attention only to the ink colour. The interference effect suggests you're not always in complete control of what you pay attention to.


Chapter 5

Knowledge and Mental Models

Introduction

By discovering what users know about systems and how they reason about how the systems function, it may be possible to predict learning time, likely errors and the relative ease with which users can perform their tasks. We can also design interfaces which support the acquisition of appropriate user mental models.

"In interacting with the environment, with others, and with the artefacts of technology, people form internal, mental models of themselves and of the things with which they are interacting. These models provide predictive and explanatory power for understanding the interaction." – Donald Norman (1993)

Mental models are representations in the mind of real or imaginary situations. Conceptually, the mind constructs a small-scale model of reality and uses it to reason, to underlie explanations and to anticipate events.

Knowledge Representation

Knowledge is represented in memory as:
• Analogical representations: picture-like images, e.g. a person's face
• Propositional representations: language-like statements, e.g. a car has four wheels

Connectionist theorists believe that analogical and propositional representations are complementary, and that we use networks of nodes where the knowledge is contained in the connections between the nodes. A connectionist network for storing information about people is shown below:
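The figure of the connectionist network is not reproduced in these notes. As a rough stand-in (a hypothetical sketch with invented people and properties, not the original diagram), the code below stores knowledge about people purely as weighted connections between nodes and retrieves related nodes by activating one of them.

```python
# Hypothetical sketch of a connectionist-style network: knowledge about people is
# held in the connections between nodes rather than in the nodes themselves.
from collections import defaultdict

class Network:
    def __init__(self):
        self.links = defaultdict(dict)   # node -> {connected node: connection strength}

    def connect(self, a, b, strength=1.0):
        self.links[a][b] = strength
        self.links[b][a] = strength      # connections are symmetric here

    def recall(self, node):
        """Activate one node and return connected nodes, strongest first."""
        return sorted(self.links[node].items(), key=lambda item: -item[1])

if __name__ == "__main__":
    net = Network()
    # Made-up example facts about two people.
    net.connect("Alice", "teacher", 0.9)
    net.connect("Alice", "plays tennis", 0.6)
    net.connect("Bob", "teacher", 0.7)
    net.connect("Bob", "drives a van", 0.8)
    print("Alice ->", net.recall("Alice"))
    print("teacher ->", net.recall("teacher"))  # a shared property links both people
```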


Knowledge in the Head vs. Knowledge in the World

Psychologist and cognitive scientist Donald A. Norman published a book titled The Psychology of Everyday Things. In it he reviews the factors that affect our ability to use the items we encounter in our everyday lives. He relates amusing but pointed stories of people's attempts to use VCRs, computers, slide projectors, telephones, refrigerator controls, etc. In his book Norman distinguishes between the elements of good design for items that are encountered infrequently or used only occasionally and those with which the individual becomes intimately familiar through constant use.

Items encountered infrequently need to be obvious. An example used by Norman is the common swinging door. The individual intending to pass through the door needs to know whether to push the door or pull it. We all have experienced doors where it was not obvious what to do, or two doors appeared the same, but only one would swing. However, when a door is well designed, it is obvious whether one is to push or pull. Norman refers to knowledge of how to use such items as being in the world.

However, when one uses something frequently, efficiency and speed are important, and the knowledge of how to use it needs to reside in the head. Most people can relate to the common typewriter or computer keyboard. When one uses it infrequently and has never learned to type, the knowledge of which key produces which character on the screen comes from visually scanning the keyboard and finding the key with the desired character. The knowledge comes from the world. However, people who frequently use a computer learn to touch type, transferring the knowledge to the head. Their efficiency and speed far exceed that of the hunt-and-peck typist. Norman describes the trade-off between knowledge in the world and knowledge in the head as follows:

Learning
• Knowledge in the world: Learning is not required – interpretation substitutes for learning. How easy it is to interpret information in the world depends upon how well it exploits natural mappings and constraints.
• Knowledge in the head: Requires learning, which can be considerable. Learning is made easier if there is meaning or structure to the material (or if there is a good mental model).

Efficiency of use
• Knowledge in the world: Tends to be slowed up by the need to find and interpret the external information.
• Knowledge in the head: Can be very efficient.

Ease of use at first encounter
• Knowledge in the world: High.
• Knowledge in the head: Low.

Knowledge Organisation – Schemata and Scripts


One of the main characteristics of knowledge is that it is highly organised. We can often retrieve information very rapidly. For example, see how quickly you can retrieve the answers to the following queries:

What is the capital of Italy?
Name a model of car manufactured by Ford.
How many days are there in a year?

You (probably) answered these very quickly, which suggests that the knowledge is organised in some way. The connectionist network is one theory for how this organisation happens. Another theory is that knowledge consists of numerous schemata. A schema is a network of general knowledge based on previous experience. It enables us to behave appropriately in different situations. For example, suppose you overheard the following conversation between two friends:

A: "Did you order it?"
B: "Yeah, it will be here in about 45 minutes."
A: "Oh... Well, I've got to leave before then. But save me a couple of slices, okay? And a beer or two to wash them down with?"

Do you know what they are talking about? You're probably pretty sure they are discussing a pizza they have ordered. But how can you know this? You've never heard this exact conversation, so you're not recalling it from memory. And none of the defining qualities of pizza are represented here, except that it is usually served in slices, which is also true of many other things. Schema theory would suggest that we understand this because we have activated our schema for pizza (or perhaps our schema for "ordering pizza for delivery") and used that schema to comprehend this scenario.

A script is a special subcase of a schema which describes a characteristic scenario of behaviour in a particular setting. Schank and Abelson (1977) described the following script for eating at a restaurant:

Component – Specific actions

Entering: walk into restaurant, look for table, decide where to sit, go to table, sit down

Ordering: get menu, look at menu, choose food, waiter arrives, give order to waiter, wait and talk, cook prepares food

Eating: cook gives food to waiter, waiter gives food to customer, customer eats, talk

Leaving: waiter prints bill, waiter delivers bill to customer, customer examines bill, calculate tip, leave tip, gather belongings, pay bill, leave restaurant

People develop a script by repeatedly carrying out a set of actions for a given setting. Specific actions are slotted into knowledge components. Schemata can guide users' behaviour when interacting with computers. As they learn to use a system they may develop scripts for specific tasks, such as creating documents, saving documents, and so on. These scripts may be useful in guiding behaviour when faced with new systems.

In older computer systems, as illustrated in the figure below, there were few universally accepted conventions for performing common tasks. Printing a file in one word processor tended to require a different set of commands or menu options from another word processor. There is much more commonality between different applications now – to save a file in pretty much any Windows application you can go to the File menu, which will be the first one in the menu bar, and select Save or Save as…. However, it is still important for interface designers to concentrate on ensuring that as far as possible their systems make use of the users' "How to use a computer" or "How to use a web site" schemata.

The WordStar word processor – how do you save a file with this?

Mental Models

A major criticism of schema-based theories is their inflexibility. We are, however, good at coping with situations where our scripts are inappropriate. We can adapt to predict states and comprehend situations we have never personally experienced.

[Figure: the designer's conceptual model is implemented as the actual system behaviour; the user observes the system image and forms a mental model of the system. The user's mental model should match the designer's conceptual model as closely as possible.]

The theory of mental models has been developed to account for these dynamic aspects of cognitive activity. Mental models are related to schemata – models are assumed to be dynamically constructed by activating stored schemata. We construct a mental model at the time to deal with a particular situation. The term "mental model" was developed in the 1940's by Craik, who said:

"If the organism carries a 'small-scale model' of external reality and of its own possible actions within its head, it is able to try out various alternatives, conclude which is the best of them, react to future situations before they arise, utilise the knowledge of past events in dealing with the present and future, and in every way react in a much fuller, safer and more competent manner to the emergencies which face it."

A key phrase here is to "try out various alternatives". When an architect is designing a building, architect's models allow alternative design ideas to be tested without actually constructing a real building. Similarly, a mental model allows a person to test a behaviour or reaction to a situation before taking any action. This is called running the mental model. The danger in using mental models is that a person's model of a situation may not be accurate.

Example: You want to heat up a room as quickly as possible, so you turn up the thermostat. Will this work?

Users will develop mental models of a computer system. It is important for interface designers to ensure that their systems encourage users to develop an appropriate mental model. The system image (the view of the system seen by the user) should guide the user's mental model towards the designer's conceptual model (the way the designer views the system).

If the users' mental model of how the system works is not accurate, then they may find the system difficult to learn, or 'unfriendly'.

Example: A Windows user is exposed to a Unix environment for the first time. He has to type a document in the Emacs editor as opposed to Word.


The user makes a typo and, without hesitating, presses the Control and Z keys, since these are the keys he has always used as a keyboard shortcut for the UNDO command. The user gets frustrated as the Emacs editor completely disappears from the screen and he is back at the Unix prompt with no error message. The fact that the user has been working on Windows builds a mental model for the UNDO command in almost all Windows programs and associates this model with the action of pressing CTRL-Z, not knowing that these keys cause a completely different action in the Unix environment.

Structural and Functional models

Two main types of mental model were identified in the 1980s:

Structural models capture the facts the user has about how a system works. Knowledge of how a device or system works can be used to predict the effect of any possible sequence of actions.

Functional models, also known as task-action mapping models, are procedural knowledge about how to use the system. The main advantage of functional models is that they can be constructed from existing knowledge about a similar domain or system.

Structural models can answer unexpected questions and make predictions, while functional models are based around a fixed set of tasks. Most of the time users will tend to apply functional models, as they are usually easier to construct.

Example: consider changing gear in a car. Think about how to do it and how you decide which gear to select. Think about constructing a structural model to capture the features of how it works. Then do the same with a functional model. Which model do you find more difficult to construct?

Bridging the Gulfs

Closely related to mental models is the idea of gulfs between the interface of a system and its users. The figure below shows the cycle of interaction between a user and a system:

[Figure: the cycle of interaction between a user and a system – the user formulates goals and actions and generates output to the human-computer interface; the system and its display are updated; the user then evaluates and interprets the display (bridging the gulf of evaluation).]

The Gulf of Evaluation is the amount of effort a user must exert to interpret the physical state of the system and how well their expectations and intentions have been met.
• Users can bridge this gulf by changing their interpretation of the system image, or changing their mental model of the system.
• Designers can bridge this gulf by changing the system image.

The Gulf of Execution is the difference between the user's goals and what the system allows them to do – it describes how directly their actions can be accomplished.
• Users can bridge this gulf by changing the way they think about and carry out the task towards the way the system requires it to be done.
• Designers can bridge this gulf by designing the input characteristics to match the users' psychological capabilities.

Design considerations

Systems should be designed to help users form correct, productive mental models. Common design methods include the following factors:
• Affordance: clues provided by an object's properties as to how the object can be used and manipulated.
• Simplicity: frequently accessed functions should be easily accessible. An interface should be simple and transparent enough for the user to concentrate on the actual task in hand.
• Familiarity: as mental models are built upon prior knowledge, it is important to use this fact in designing a system. Relying on the user's familiarity with an old, frequently used system gains the user's trust and helps in accomplishing a large number of tasks. Metaphors in user interface design are an example of applying the familiarity factor within the system.
• Availability: since recognition is always better than recall, an efficient interface should always provide cues and visual elements to relieve the user from the memory load necessary to recall the functionality of the system.
• Flexibility: the user should be able to use any object, in any sequence, at any time.
• Feedback: complete and continuous feedback from the system through the course of the user's actions. Fast feedback helps in assessing the correctness of a sequence of actions.

We will look in more detail later on at techniques for interaction design taking these factors into consideration.


Chapter 6

Interface Metaphors
Metaphors convey an abstract concept in a more familiar and accessible form. A metaphor is a figure of speech in which an expression is used to refer to something that it does not literally denote, in order to suggest a similarity. A widely quoted example can be found in Shakespeare's As You Like It: "All the world's a stage...". Metaphors are widely used to draw on users' existing knowledge when they learn new computer systems.

Verbal Metaphors
Verbal metaphors are useful tools to help users understand a new system. For example, people using a word processor for the first time consider it similar to a typewriter. This perceived similarity activates the user's 'typewriter' schema, allowing them to interpret and predict the behaviour of the word processor, for example:
• The typewriter has a QWERTY keyboard – the computer has a QWERTY keyboard
• Keys should have the same effect as they do on a typewriter

These links provide a basic foundation from which users develop their mental models. Knowledge about a familiar domain (the typewriter), in terms of elements and their relations to each other, is mapped onto elements and their relations in the unfamiliar domain (the computer):
• Elements: keyboard, spacebar, return key
• Relations: hit only one character key at a time; hitting a character key will result in a letter being displayed on a visible medium
By drawing on prior knowledge a learner can develop an understanding of the new domain more readily.

Dissimilarities, however, can cause problems for learners. For example, the backspace key on a typewriter moves the carriage back, while on a word processor it usually deletes a character. However, as users become aware of the discrepancies, they can develop a new mental model.

Advance organisers
Verbal metaphors provided in advance can aid learning. For example, Foss (1982) studied the effect of describing a metaphor for a system to new users before they start learning. This is called an advance organiser. In this case file creation and storage were explained in terms of a filing cabinet metaphor, and the result was that these users performed better when they actually used the system.

Virtual Interface Metaphors
Verbal metaphors try to explain the use of a computer system in terms of something that it resembles. Xerox, in the late 1970s, realised the potential of deliberately designing interfaces to be more like the real world. Instead of using verbal metaphors to help users understand the interface, they went further and designed an interface metaphor for the Star system – an interface design based on a real office. Office objects (paper, folders, filing cabinets, in and out trays) were represented as icons. The overall organising metaphor on the screen was the desktop, resembling the top of a real office desk.

The ideas of the Star system evolved first into the Apple Macintosh interface, which then influenced the Windows interface. We still have desktops on our PCs and Macs.
• Verbal metaphor – parts of a computer system are like a typewriter
• Interface metaphor – the desktop is like an office desktop, but it also is the interface
Instead of using a verbal metaphor as a basis from which to develop a mental model, the interface metaphor is itself the model that is learned. This tends to lead to the development of functional mental models of the system.

Using the Star interface
Files were transformed from abstract entities with names into pictorial representations, or icons. The pictorial representation lets users know how to interact with them – for example, a file or folder can be opened, or placed in the trash. The mouse was developed to allow electronic actions to be performed analogous to physical actions:

Physical action              Electronic action
Placing hand on object       Clicking
Picking up an object         Selecting
Moving an object             Dragging
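The table above can be read as a mapping from low-level mouse events onto the metaphorical actions a desktop interface exposes. The sketch below is a hypothetical illustration of that idea – the event names and the DesktopIcon class are invented for this example and are not taken from the Star system or any real toolkit.

# Hypothetical sketch: translating raw mouse events into the metaphorical
# "select" and "drag" actions of a desktop interface.

class DesktopIcon:
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y
        self.selected = False

    def contains(self, x, y, size=32):
        return abs(x - self.x) <= size and abs(y - self.y) <= size

class Desktop:
    def __init__(self, icons):
        self.icons = icons
        self.dragging = None

    def on_mouse_down(self, x, y):
        # "Placing hand on object" / "picking up an object" -> clicking selects the icon.
        for icon in self.icons:
            icon.selected = icon.contains(x, y)
            if icon.selected:
                self.dragging = icon

    def on_mouse_move(self, x, y):
        # "Moving an object" -> dragging moves the selected icon.
        if self.dragging:
            self.dragging.x, self.dragging.y = x, y

    def on_mouse_up(self, x, y):
        self.dragging = None

# Usage: simulate clicking the 'Report' icon and dragging it to a new position.
desk = Desktop([DesktopIcon("Report", 10, 10), DesktopIcon("Trash", 200, 200)])
desk.on_mouse_down(12, 8)
desk.on_mouse_move(120, 80)
desk.on_mouse_up(120, 80)
print([(i.name, i.x, i.y, i.selected) for i in desk.icons])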

Evolution of the desktop
The following screenshots show the evolution of the virtual interface metaphor from the Star to the present day. Note how the desktop metaphor has become generally less literal, and new elements have been introduced (the Start button, the My Computer icon, and so on). Why do you think this has happened?


Composite metaphors
A problem with the 'metaphor as model' approach is the difficulty of introducing new functionality which does not fit into the interface metaphor. Designers have got round this by introducing composite metaphors. These allow the desktop metaphor to include objects which do not exist in the physical office, for example:
• Menus
• Windows
• Scroll bars (these make use of the concept of unrolling a scroll, or rolled-up document)
It might be assumed that users would have difficulty with composite metaphors. In general, it has been found that people can deal with them rather well and can develop multiple mental models. Some composite mental models can, however, cause confusion. For example, on a Macintosh you can eject a disk by dragging it to the trash – you retrieve your disk by 'throwing it away'.

Interface metaphors for applications
The desktop metaphor has been used successfully for operating systems. However, other metaphors have been developed for specific types of applications, for example:
• Operating system – Metaphor: the desktop. Familiar knowledge: office tasks, files, documents.
• Data storage – Metaphor: filing system. Familiar knowledge: files, folders, storing files, retrieving files.
• Spreadsheets – Metaphor: ledger sheet. Familiar knowledge: columnar tables, calculations.
• The Web – Metaphor: travel. Familiar knowledge: going from place to place.
• Online shopping – Metaphor: shopping cart. Familiar knowledge: adding items, checking out.
• Graphics packages – Metaphor: toolbox. Familiar knowledge: paint, brushes, pencils, rubbers.
• Media players – Metaphor: tape/CD player. Familiar knowledge: play, stop, fast-forward/rewind buttons.
• Multimedia environments and web sites – Metaphor: rooms (each associated with a different task). Familiar knowledge: interiors of buildings.

Some researchers have experimented with virtual reality environments for interacting with applications such as databases. The figure below shows a database query represented as opening a drawer in a room where data on a specific topic is stored. Do you think this type of metaphor is helpful?


Similarly, some early web sites used elaborate metaphors to make themselves closely resemble the real world. These were not generally considered to be successful. Compare the Southwest Airlines home page from 1997 with its replacement from 1999. Why do you think the original metaphor was abandoned?

Metaphors in user interface elements
The examples above show the use of metaphors to assist users to develop a mental model for an application, or a major part of an application. Within applications and web pages, metaphors can also be used for specific interface elements. These help users understand how to interact with those elements. For example:
• Tabs – Metaphor: card file. Commonly used in web pages and dialogue boxes.
• Progress bars – Metaphor: progress is to the right (an orientation metaphor). Based on the direction of reading – highly cultural, as some cultures read from right to left (see the sketch below).
• Icons – Symbolic icons use metaphors to convey their meaning, e.g. a globe to represent the World Wide Web, or the magnifying glass icon in a photo manipulation program to represent zooming in on an image.
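As a small illustration of the orientation metaphor, the hypothetical sketch below renders a text progress bar that fills from left to right. The left-to-right direction is simply the convention chosen here; it would need to be mirrored for right-to-left reading cultures.

# Hypothetical sketch: a text progress bar using the "progress is to the right" metaphor.
import sys
import time

def draw_progress(fraction: float, width: int = 30) -> None:
    """Render a bar that fills from left to right as fraction goes from 0.0 to 1.0."""
    filled = int(round(fraction * width))
    bar = "#" * filled + "-" * (width - filled)
    sys.stdout.write(f"\r[{bar}] {int(fraction * 100):3d}%")
    sys.stdout.flush()

for step in range(0, 101, 10):
    draw_progress(step / 100)
    time.sleep(0.1)   # stand-in for real work being done
print()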

Pervasive computing
Pervasive computing, also known as ubiquitous computing, is the trend towards increasingly ubiquitous connected computing devices in the environment, a trend being brought about by a convergence of advanced electronic – and particularly wireless – technologies and the Internet. Pervasive computing devices are not personal computers as we tend to think of them, but very tiny – even invisible – devices, either mobile or embedded in almost any type of object imaginable, including cars, tools, appliances, clothing and various consumer goods, all communicating through increasingly interconnected networks. In these devices, the computer interface moves away from the desktop and the interface metaphor is invisible to the user.

The goal of researchers is to create a system that is pervasively and unobtrusively embedded in the environment, completely connected, intuitive, effortlessly portable, and constantly available. Among the emerging technologies expected to prevail in the pervasive computing environment of the future are wearable computers, smart homes and smart buildings. A number of leading technological organizations are exploring pervasive computing. Xerox's Palo Alto Research Center (PARC), for example, has been working on pervasive computing applications since the 1980s. IBM's project Planet Blue is largely focused on finding ways to integrate existing technologies with a wireless infrastructure. Carnegie Mellon University's Human Computer Interaction Institute (HCII) is working on similar research in its Project Aura. The Massachusetts Institute of Technology (MIT) has a project called Oxygen. MIT named its project after that substance because they envision a future of ubiquitous computing devices as freely available and easily accessible as oxygen is today.


Chapter 7
Input

Input Devices
Input is concerned with:
• Recording and entering data into a computer system
• Issuing instructions to the computer

An input device is "a device that, together with appropriate software, transforms information from the user into data that a computer application can process".

The choice of input device for a computer system should contribute positively to the usability of the system. In general, the most appropriate device is one which:
• Matches the user's physiological characteristics, psychological characteristics, and training and expertise
• Is appropriate for the tasks, e.g. continuous movement, discrete movement
• Is suitable for the work environment, e.g. speech input is not appropriate for a noisy workplace

Many systems use two or more complementary input devices together, such as a keyboard and a mouse. There should also be appropriate system feedback to guide, reassure and, if necessary, correct users' errors, for example:
• On screen – text appearing, cursor moving across the screen
• Auditory – alarm, sound of a mouse button clicking
• Tactile – feel of a button being pressed, change in pressure

Keyboards
A keyboard is:
• A group of on-off push-buttons
• A discrete entry device

Issues:
• Physical design of keys – size, feedback, robustness
• Grouping and layout of keys – the QWERTY typewriter layout is most common, but others are possible

Types of keyboard

QWERTY

Standard alphanumeric keyboard designed for typewriters. The key arrangement was chosen to reduce the incidence of keys jamming in mechanical typewriters.


Some handheld computers have very small QWERTY keyboards

Dvorak

The Dvorak keyboard is a typewriter key arrangement that was designed to be easier to learn and use than the standard QWERTY keyboard. The Dvorak keyboard was designed from the typist's point-of-view - with the most common consonants on one side of the middle or home row and the vowels on the other side so that typing tends to alternate key strokes back and forth between hands. The Dvorak approach is said to lead to faster typing. It was named after its inventor, Dr. August Dvorak. Dr. Dvorak also invented systems for people with only one hand.Both Windows and Macintosh operating systems provide ways for the user to tell the system that they are using a Dvorak keyboard. Although the QWERTY system seems too entrenched to be replaced by the Dvorak system, some keyboard users will prefer the more ergonomic arrangement of the Dvorak system.


Dvorak and single-handed keyboards

Chord Keyboards
Chord keyboards are smaller and have fewer keys, typically one for each finger and possibly the thumbs. Instead of the usual sequential, one-at-a-time key presses, chording requires simultaneous key presses for each character typed, similar to playing a musical chord on a piano.

The primary advantage of the chording keyboard is that it requires far fewer keys than a conventional keyboard. For example, with five keys there are 31 chord combinations that may represent letters, numbers, words, commands, or other strings. With fewer keys, finger travel is minimized because the fingers always remain on the same keys. In addition, the user is free to place the keyboard wherever it is convenient and may avoid the unnatural keying posture associated with a conventional keyboard.The most significant disadvantage of the chording keyboard is that it cannot be used by an untrained person. At least 15 hours of training and practice are necessary to learn the chord patterns that represent individual letters and numbers. A second disadvantage of the chording keyboard is that data entry rates (characters per unit of time) are actually slower than data entry rates for conventional keyboards. Due to the increased learning time and slower performance, chording keyboards have not become commercially viable except for specialized applications.
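The figure of 31 chords comes from counting every non-empty combination of five keys (2^5 − 1 = 31). The short sketch below simply enumerates those combinations; the key names are invented for illustration.

# Enumerate all non-empty chords on a hypothetical five-key chord keyboard.
from itertools import combinations

keys = ["K1", "K2", "K3", "K4", "K5"]   # one key per finger (names are illustrative)

chords = [combo for size in range(1, len(keys) + 1)
          for combo in combinations(keys, size)]

print(len(chords))          # 31 = 2**5 - 1 possible chords
print(chords[:5])           # e.g. ('K1',), ('K2',), ...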


The Twiddler and The Bat

Dedicated buttons

Some computer systems have custom-designed interfaces with dedicated keys or buttons for specific tasks. These can be useful when there is a very limited range of possible inputs to the system and where the environment is not suitable for an ordinary keyboard. In-car satellite navigation systems and gamepads for computer games are good examples.

Pointing Devices
• These can be used to specify a point or a path.
• Pointing devices are usually continuous entry devices.

Cursor controls

Two dimensional devices which can move a cursor and drag objects on the screen.

Mice
A mouse can be moved around on a flat surface. Mice are not convenient in limited spaces.


Presentation mice
Handheld devices, usually wireless, which do the same job as an ordinary mouse but do not need a surface.

Trackballs
The ball rotates in a fixed socket. Some people find this easier to use than a mouse.

Touchpads
Usually found on laptop computers, but can also be used as separate devices. They work like trackballs but without moving parts.

Joysticks
Used when the user needs to input direction and speed. Other devices are used to indicate position. To see the difference, consider playing a flight simulation game with a mouse as your input device. Why would this be difficult?

Cursor Keys
Cursor keys can be used to move a cursor, but it is difficult to accomplish dragging with them. Using keys can provide precise control of movement by moving in discrete steps, for example when moving a selected object in a drawing program. Some handheld computers have a single cursor button which can be pressed in any of four directions.

Touch screens
Touch displays allow the user to input information into the computer simply by touching an appropriate part of the screen. This kind of screen is bi-directional – it both receives input and outputs information.

Advantages:

• Easy to learn – ideal for an environment where use by a particular user may only occur once or twice
• Require no extra workspace
• No moving parts (durable)
• Provide very direct interaction

Disadvantages:


• Lack of precision
• High error rates
• Arm fatigue
• Screen smudging

Touch screens are used mainly for:
• Kiosk devices in public places, for example for tourist information
• Handheld computers

Pen Input
Touchscreens designed to work with pen devices rather than fingers have become very common in recent years. Pen input allows more precise control and, with handwriting recognition software, also allows text to be input. Handwriting recognition can work with ordinary handwriting or with purpose-designed alphabets such as Graffiti.

Pen input is used in handheld computers (PDAs) and specialised devices, and more recently in tablet PCs, which are similar to notebook computers, running a full version of the Windows operating system, but with a pen-sensitive screen and with the operating system and applications modified to take advantage of pen input.

Pen input is also used in graphics tablets, which are designed to provide precise control for computer artists and graphic designers.

A Palm PDA and the Graffiti alphabet


A tablet PC

Using a graphics tablet

3D input devices
All the pointing devices described above allow input and manipulation in two dimensions. Some applications require input in three dimensions, and specialised input devices have been developed for these.

3D trackers
3D trackers are often used to interact with Virtual Reality environments.
• Stationary controllers (small range of motion) – best for precise 3D element manipulation
• Motion trackers (large range of motion) – best for 3D region pointing or head tracking
• Virtual Reality gloves (datagloves) – hand gestures
• Head-mounted displays (HMDs: tracker plus displays) – best for 3D scene navigation and exploration


HMD and dataglove

3D mice
These allow movement in more than two dimensions, and are often used together with an ordinary mouse. For example, this allows a designer to simultaneously pan, zoom and rotate 3D models or scenes with the controller in one hand while the other hand selects, inspects or edits with the mouse.

The Spacemouse

Speech input


Speech or voice recognition is the ability of a machine or program to recognize and carry out voice commands or take dictation. In general, speech recognition involves the ability to match a voice pattern against a provided or acquired vocabulary. Usually, a limited vocabulary is provided with a product and the user can record additional words. More sophisticated software has the ability to accept natural speech (meaning speech as we usually speak it rather than carefully spoken speech).

There are three basic uses of speech recognition:
1. Command & control – give commands to the system that it will then execute (e.g., "exit application" or "take airplane 1000 feet higher"); usually speaker-independent. A minimal command-and-control sketch is given below.
2. Dictation – dictate to a system, which will transcribe your speech into written text; usually speaker-dependent.
3. Speaker verification – your voice can be used as a biometric (i.e., to identify you uniquely).
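The hypothetical sketch below illustrates the command-and-control style. It assumes that a speech recogniser has already turned the utterance into text (the recogniser itself is not shown); the command phrases and handler functions are invented for illustration.

# Hypothetical command-and-control dispatcher: maps recognised phrases to actions.
# A real system would obtain `utterance` from a speech recogniser; here it is plain text.

def exit_application():
    print("Exiting the application.")

def climb(feet: int):
    print(f"Climbing {feet} feet.")

def handle_utterance(utterance: str) -> None:
    words = utterance.lower().split()
    if words[:2] == ["exit", "application"]:
        exit_application()
    elif "higher" in words and "feet" in words:
        # Crude slot-filling: take the number spoken just before "feet".
        feet = int(words[words.index("feet") - 1])
        climb(feet)
    else:
        print(f"Sorry, I did not understand: '{utterance}'")

handle_utterance("exit application")
handle_utterance("take airplane 1000 feet higher")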

Speech input is useful in applications where the use of hands is difficult, either due to the environment or to a user's disability. It is not appropriate in environments where noise is an issue. Much progress has been made, but we are still a long way from the image we see in science fiction of humans conversing naturally with computers.

Choosing Devices
The preceding pages have described a wide range of input devices. We will now look briefly at some examples which illustrate the issues that need to be taken into account when selecting devices for a system – matching devices with:
• the work
• the users
• the environment

Matching devices with work
Example: Application – panning a small window over a large graphical surface (such as a layout diagram of a processor chip) which is too large to be viewed in detail as a whole. The choice of device is between a trackball and a joystick. This scenario was suggested by Buxton (1986).

Task 1. Panning and zooming

The trackball is better suited, as motion of the ball is mapped directly to motion over the surface, while motion of the joystick is mapped to the speed of motion over the surface. There is no obvious advantage to either device for zooming.
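The difference between the two mappings can be made explicit in code: a trackball is a position-control (zero-order) device, so displacement of the ball maps directly onto displacement of the view, while a joystick is typically a rate-control (first-order) device, so its deflection maps onto a velocity that is integrated over time. The sketch below is a generic illustration of these two mappings, not a description of any particular system; the parameter names are invented.

# Hypothetical sketch: position control (trackball) versus rate control (joystick)
# for panning a view over a large surface.

def pan_with_trackball(view_x, view_y, ball_dx, ball_dy, gain=1.0):
    """Ball displacement maps directly to view displacement (zero-order control)."""
    return view_x + gain * ball_dx, view_y + gain * ball_dy

def pan_with_joystick(view_x, view_y, stick_x, stick_y, dt, max_speed=200.0):
    """Stick deflection maps to panning velocity, integrated over time (first-order control)."""
    return view_x + max_speed * stick_x * dt, view_y + max_speed * stick_y * dt

# Trackball: one discrete movement of the ball moves the view once.
print(pan_with_trackball(0, 0, ball_dx=15, ball_dy=-4))

# Joystick: holding the stick half-deflected keeps the view moving every frame.
x = y = 0.0
for _ in range(10):                      # ten frames at 50 ms each
    x, y = pan_with_joystick(x, y, stick_x=0.5, stick_y=0.0, dt=0.05)
print((x, y))                            # the view has drifted 50 units to the right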


Task 2. Panning and zooming simultaneously

Zooming and panning simultaneously is possible with the joystick (just displace and twist) but virtually impossible with the trackball, so the joystick is better suited.

Task 3. Locating an object by panning and then manipulating it by twisting, without letting the position drift.

It is difficult to locate the object accurately and keep it stationary with the joystick, but this can be done with the trackball. When the trackball is stopped, motion stops, and the bezel around the ball can be rotated. The trackball is therefore better suited.

Matching Devices with Users

Example: Eye control for users with severe disabilities
Most input devices rely on hand movements. When a user is unable to use their hands at all for input, eye-controlled systems provide an alternative. For example, the Iriscom system moves the mouse pointer by tracking a person's eye movement, and mouse clicks are performed by blinking. It also has an on-screen keyboard so users can input text. It can be used by anyone who has control of one eye, including people wearing glasses or contact lenses.

A camera takes the place of the touchpad

Matching devices with the environment
Example: The BMW i-drive control
The systems in expensive cars are becoming increasingly computer-controlled. Functions which were previously provided by separate devices with their own controls are often now controlled by a single computer interface. An in-car computer system is in a very different environment from a desktop PC or even a laptop. The driver of the car needs to use its functions (entertainment, climate, navigation, etc.) while driving. This requires an interface which allows the driver to operate it with one hand, with the input device close to hand. The output must be clearly visible in a position which minimises the time the driver must spend looking away from the road ahead.

BMW's solution to this problem is the i-drive, which consists of a multifunction control situated where a gear lever would usually be found. This control allows the driver to select options displayed on a screen mounted high in the dashboard. The i-drive has been controversial, and it is likely that there will be many further developments in the devices used to interact with computers in the driving environment.


Chapter 8
Output

Output Devices
Output devices convert information from an internal representation in a computer system into a form understandable by humans. Most computer output is visual and two-dimensional. Screens and hard copy from a printer are the most common. These devices have developed greatly in the past two decades, giving greater opportunities for HCI designers:

Screens
• Larger display areas, more colours and higher resolutions, allowing interfaces to become more graphical and to present more information.
• Higher-performance graphics adapters allowing detailed 3D visualisations to be displayed.
• High-quality flat panel screens which can fit in laptop and pocket computers. Flat panel screens can also save space in non-mobile systems, which can be highly significant in environments where large amounts of information must be displayed in a limited physical space, e.g. a stock exchange.
• Touch screens, using finger or pen for input – these allow input and output devices to be combined.

Printers
• Speed and quality – the laser printer allowed computers to output high-quality text quickly.
• Colour – inkjet printers made it possible for a wide range of users to produce colour hard copy.
• Cost – the reduction in cost of printers allows much more flexibility in their use.

In order to increase the bandwidth for information reaching the user, an important goal is to use more channels in addition to visual output. One commonly used supplement to visual information is sound, but its true potential is often not recognized. Audible feedback can make interaction substantially more comfortable for the user, providing unambiguous information about the system state and the success or failure of an interaction (e.g. a button press), without putting still more load onto the visual channel. Tactile feedback can serve a similar purpose, for example input devices physically "reacting" to input. This form of output is often used in gaming (force feedback joysticks, steering wheels, etc.) as well as in other specialized applications.

Purposes of Output
Output has two primary purposes:
• Presenting information to the user
• Providing the user with feedback on actions

Visual Output
Visual display of text or data is the most common form of output. The key design considerations are that the information displayed be legible and easy to locate and process. These issues relate to aspects of human cognition described in previous chapters. Visual output can be affected by poor lighting, eye fatigue, screen flicker and the quality of text characters.


Visual Feedback
Users need to know what is happening on the computer's side of an interaction. The system should provide responses which keep users well informed and feeling in control. This includes providing information about both normal processes and abnormal situations (errors).

Where possible, a system should be able to:
• Tell a user where he or she is in a file or process
• Indicate how much progress has been made
• Signify that it is the user's turn to provide input
• Confirm that input has been received
• Tell the user that input just received is unsuitable
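The hypothetical sketch below illustrates three of the points in this list – telling the user where they are in a process, confirming that input has been received, and explaining when input is unsuitable – using a simple console prompt. The prompt text and validation rule are invented for illustration.

# Hypothetical sketch of simple feedback: acknowledge valid input, explain unsuitable
# input, and show the user where they are in the process.

def ask_for_age(step: int, total_steps: int) -> int:
    while True:
        print(f"Step {step} of {total_steps}: please enter your age.")   # where am I?
        raw = input("> ").strip()
        if raw.isdigit() and 0 < int(raw) < 130:
            print(f"Thank you - age {raw} recorded.")                    # input received
            return int(raw)
        # Input just received is unsuitable: say so, and say what is expected.
        print(f"'{raw}' is not a valid age; please enter a whole number between 1 and 129.")

if __name__ == "__main__":
    ask_for_age(step=2, total_steps=5)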

Dynamic visualizations
Dynamic visualizations are computer-controlled visualizations of information. They can allow users to interact with information in a visual way, often allowing complex relationships within the information to be discovered which would be difficult to observe in textual form. The figure below shows a visualization of a 'social network' – information about messages passed among a group of students and the replies received. The information is shown as a 3D wireframe model, and the user can rotate the model or zoom in and out at will to explore it.

This is an example of visualisation of external data which comes from a process beyond the computer’s control. Model-based simulation displays information based on a model or simulation under the computer’s control, such as a visualization of a mathematical function.


A simulation in Mathematica (www.wolfram.com)

All visualizations require a mapping between the information or model and the way it is displayed. The mapping should be chosen to make the meaning we wish to extract perceptually prominent. For example, in the social network visualisation, the position of the student in the model indicates the number of replies received to messages.

Computer-based visualisations have advantages over other forms of visualisation, such as video recordings, for communicating information:
• They can be controlled interactively by the user
• The mappings between the model and the way it is displayed can be changed, for example by changing colour coding
• Once mappings have been established, visualisations of new situations can easily be produced

Sound Output
Sound is of particular value where the eyes are engaged in some other task or where the complete situation of interest cannot be visually scanned at one time, for example:
• Applications where eyes and attention are required away from the screen – e.g. flight decks, medical applications, industrial machinery
• Applications involving process control in which alarms must be dealt with
• Applications addressing the needs of blind or partially sighted users
• Data sonification – situations where data can be explored by listening, given an appropriate mapping between data and sound. The Geiger counter is a well-known example of sonification. (A small sonification sketch is given below.)
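As an illustration of sonification, the sketch below maps data values onto the pitch of a tone, in the spirit of the Geiger counter example. The play_tone function is a stand-in for whatever audio facility is actually available – it is an assumption for this sketch, not a standard library call.

# Hypothetical sonification sketch: map each data value onto the pitch of a tone.
# `play_tone` stands in for a real audio call (e.g. from a sound library); it is assumed.

def play_tone(frequency_hz: float, duration_s: float) -> None:
    # Placeholder: a real implementation would synthesise and play the tone.
    print(f"tone: {frequency_hz:7.1f} Hz for {duration_s:.2f} s")

def sonify(values, low=200.0, high=2000.0):
    """Linearly map values onto frequencies between `low` and `high` Hz and play them."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0                  # avoid division by zero for flat data
    for v in values:
        frequency = low + (high - low) * (v - vmin) / span
        play_tone(frequency, duration_s=0.2)

sonify([3, 5, 9, 4, 12, 7])   # rising and falling pitch traces the shape of the data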

Natural sounds
Gaver (1989) developed the idea of auditory icons. These are natural, everyday sounds which are used to represent actions and objects within an interface. Gaver suggests that sounds are good for providing information on background processes or inner workings without disrupting visual attention. At that time, most computer systems were capable only of very simple beeps. Developments in sound generation in computers have made it possible to play back high-quality sampled or synthesised sounds, and a wide variety of natural sounds are now used in applications.


Speech
Speech output is one of the most obvious means of using sound to provide users with feedback. Successful speech output requires a method of synthesising spoken words which can be understood by the user. Speech can be synthesised using one of two basic methods:

Concatenation
Digital recordings of real human speech are stored. These can be words, word segments, phrases or sentences, and they can be played back later under computer control. This can be very successful when only a very limited range of output is required – for example, announcements on trains which state the name of the next station.
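A minimal sketch of the concatenation approach is shown below: pre-recorded clips are simply looked up and played in sequence. The clip file names and the play_clip function are assumptions made for illustration, not part of any real announcement system.

# Hypothetical concatenation sketch: build an announcement from pre-recorded clips.

CLIPS = {
    "the next station is": "clips/next_station.wav",   # assumed file names
    "central":             "clips/central.wav",
    "park street":         "clips/park_street.wav",
}

def play_clip(path: str) -> None:
    # Placeholder for a real audio-playback call.
    print(f"playing {path}")

def announce(phrases) -> None:
    """Play the recordings for each phrase in order, producing the full announcement."""
    for phrase in phrases:
        play_clip(CLIPS[phrase])

announce(["the next station is", "central"])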

Synthesis-by-rule
This does not use recordings. Instead, speech is completely synthesized by the computer using phonemes as building blocks. A phoneme is the smallest contrastive unit in the sound system of a language. This approach can be useful when large vocabularies are required, and it also allows variation in pitch and tone. The W3C is currently working on a specification for Speech Synthesis Markup Language (SSML), an XML-based markup language for specifying synthesised speech output.


Speech output is not always successful – the infamous 1983 Austin Maestro had digital instruments and a speaking computer. These innovations were not widely copied…


Chapter 9
User Support

Requirements for support
Users have different requirements for support at different times. User support should be:
• Available but unobtrusive
• Accurate and robust
• Consistent and flexible
User support comes in a number of styles:
• Command-based methods
• Context-sensitive help
• Tutorial help
• On-line documentation (integrated with the application)
• Written documentation (manuals or notices)
• Web-based documentation
Design of user support must take account of:
• Presentation
• Implementation

Online Help
Many programs come with the instruction manual, or a portion of the manual, integrated into the program. If you encounter a problem or forget a command while running the program, you can summon the documentation by pressing a designated Help key or entering a HELP command. Once you summon the Help system, the program often displays a menu of Help topics. You can choose the appropriate topic for whatever problem you are currently encountering, and the program will then display a help screen containing the desired documentation. Some programs are more sophisticated, displaying different Help messages depending on where you are in the program. Such systems are said to be context-sensitive.

There has been a large body of research aimed at understanding how users interact with online help. One aspect which has been studied is the kind of questions which prompt use of online help (O'Malley 1986, Sellen & Nicol 1990). Typical questions appear to focus on:
• Goal exploration – what can I do with this system?
• Definition and description – what is this? What is it for?
• Task achievement – how do I do this?
• Diagnostic – how did that happen?
• State identification – where am I?
Most desktop applications and operating systems now have comprehensive online help systems, and a range of tools are available to make the process of creating help systems relatively easy. However, the system designer must take account of the above question types to make sure that the system is actually addressing users' needs.
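Context-sensitive help can be thought of as a lookup from the user's current location in the interface to the relevant help topic. The sketch below is a hypothetical illustration of that idea; the context names and help texts are invented.

# Hypothetical context-sensitive help: the help shown depends on where the user is.

HELP_TOPICS = {
    "print_dialog": "Choose a printer and the number of copies, then press Print.",
    "save_dialog":  "Type a file name and choose a folder, then press Save.",
    "main_editor":  "Type your text here. Use the File menu to save or print.",
}

def show_help(current_context: str) -> None:
    topic = HELP_TOPICS.get(current_context)
    if topic is None:
        # Fall back to a general topic menu when the context is unknown.
        print("Help topics:", ", ".join(sorted(HELP_TOPICS)))
    else:
        print(topic)

show_help("print_dialog")   # help relevant to the dialog the user currently has open
show_help("unknown_panel")  # an unknown context falls back to the topic menu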


‘Clippy’ – the much-loved Microsoft Office Assistant

Training Wheels
New users of high-function application systems can become frustrated and confused by the errors they make in the early stages of learning. A training interface for a commercial word processor (Training Wheels) was designed by Carroll (1984) to make typical and troublesome error states "unreachable", thus eliminating the sources of some new-user learning problems. Creating a training environment from the basic functions of the system itself afforded substantially faster learning, coupled with better learning achievement and better performance on a comprehension post-test. A control group spent almost a quarter of their time recovering from the error states that the training interface blocked off.

The principle of allowing access to different levels of functionality in an application depending on the user's level of expertise has been used in a variety of applications since then. However, this approach can lead to frustration for users when they are told 'option not available'.
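The training-wheels idea can be sketched as gating commands by user level, so that error-prone functions are simply "unreachable" for novices – at the cost of the 'option not available' frustration mentioned above. The command names and levels below are invented for illustration and do not reflect Carroll's actual interface.

# Hypothetical "training wheels" sketch: block error-prone commands for novice users.

FULL_COMMANDS = {"type", "save", "print", "mail merge", "macros", "styles"}
NOVICE_COMMANDS = {"type", "save", "print"}          # the reduced, safer subset

def run_command(command: str, user_level: str) -> None:
    allowed = FULL_COMMANDS if user_level == "expert" else NOVICE_COMMANDS
    if command not in FULL_COMMANDS:
        print(f"Unknown command: {command}")
    elif command not in allowed:
        # The troublesome state is made unreachable for novices.
        print(f"'{command}' is not available at the {user_level} level.")
    else:
        print(f"Running '{command}'.")

run_command("mail merge", user_level="novice")   # blocked: option not available
run_command("mail merge", user_level="expert")   # allowed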
