usability evaluations (part 2)

IM2044 – Week 8: Lecture – Dr. Andres Baravalle


DESCRIPTION

Week 8 lecture for IM2044, 2012-2013

TRANSCRIPT

Page 1: Usability evaluations (part 2)

IM2044 – Week 8: Lecture
Dr. Andres Baravalle

1

Page 2: Usability evaluations (part 2)

Interaction design

• The next slides are (very loosely) based on the companion slides for the textbook

• By the end of this week, you should have studied all chapters of the textbook up to chapter 11

• Today we will be covering chapter 14.

2

Page 3: Usability evaluations (part 2)

Outline

• Usability testing (part 2)
– Usability testing scenario: OpenSMSDroid

• Usability inquiry

3

Page 4: Usability evaluations (part 2)

Usability testing

• When: common for comparison of products or prototypes
• Tasks & questions focus on how well users perform tasks with the product
– Focus is on time to complete task & number & type of errors
• Data collected by video & interaction logging
• Experiments are central in usability testing
– Usability inquiry tends to use questionnaires & interviews

4

Page 5: Usability evaluations (part 2)

Testing conditions

• Usability lab or other controlled space
• Emphasis on:
– Selecting representative users
– Developing representative tasks
• Small sample (5-10 users) typically selected
• Tasks usually last no longer than 30 minutes
• The test conditions should be the same for every participant

5

Page 6: Usability evaluations (part 2)

Some types of data

• Time to complete a task
• Time to complete a task after a specified time away from the product
• Number and type of errors per task
• Number of errors per unit of time
• Number of navigations to online help or manuals
• Number of users making a particular error
• Number of users completing the task successfully

6
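To make these measures concrete, below is a minimal sketch (my own illustration, not part of the original slides) of how they could be computed from a hypothetical interaction log. The record fields (task, seconds, errors, completed, help_visits) and the data values are illustrative assumptions.

# Sketch: summarising hypothetical usability-test logs.
# Field names and values below are illustrative assumptions.
from collections import defaultdict

trials = [
    {"user": "P1", "task": "send_sms", "seconds": 74, "errors": 2, "completed": True,  "help_visits": 0},
    {"user": "P2", "task": "send_sms", "seconds": 58, "errors": 0, "completed": True,  "help_visits": 1},
    {"user": "P3", "task": "send_sms", "seconds": 95, "errors": 4, "completed": False, "help_visits": 2},
]

by_task = defaultdict(list)
for t in trials:
    by_task[t["task"]].append(t)

for task, rows in by_task.items():
    n = len(rows)
    total_seconds = sum(r["seconds"] for r in rows)
    mean_time = total_seconds / n                             # time to complete a task
    errors_per_task = sum(r["errors"] for r in rows) / n      # number and type of errors per task
    errors_per_minute = sum(r["errors"] for r in rows) / (total_seconds / 60)
    help_navigations = sum(r["help_visits"] for r in rows)    # navigations to online help or manuals
    completion_rate = sum(r["completed"] for r in rows) / n   # users completing the task successfully
    print(task, round(mean_time, 1), round(errors_per_task, 2),
          round(errors_per_minute, 2), help_navigations, f"{completion_rate:.0%}")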

Page 7: Usability evaluations (part 2)

How many participants are enough for user testing?

• The number is a practical issue
• Depends on:
– Schedule for testing
– Availability of participants
– Cost of running tests
• Typically 5-10 participants
– Some experts argue that testing should continue with additional users until no new insights are gained

7

Page 8: Usability evaluations (part 2)

Experiments & usability testing

• An experiment is “a method of testing - with the goal of explaining - the nature of reality” (Wikipedia, 2011)

• Experiments test hypotheses to discover new knowledge by investigating the relationship between two or more things – i.e., variables.
– Experiments may be used in usability testing

8

Page 9: Usability evaluations (part 2)

Usability testing & research

Usability testing
• Improve products
• Few participants (typically)
• Results inform design
• Conditions controlled as much as possible
• Procedure planned
• Results reported to developers

Experiments for research
• Discover knowledge
• Many participants
• Results validated statistically
• Strongly controlled conditions
• Experimental design
• Scientific report to scientific community

9

Page 10: Usability evaluations (part 2)

Experiments

• Predict the relationship between two or more variables.

• Independent variable is manipulated by the researcher
– Dependent variable depends on the independent variable
– Typical experimental designs have one or two independent variables

• Validated statistically & replicable

10
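As a minimal illustration of statistical validation (not taken from the lecture), the sketch below compares a dependent variable (task completion time) across two levels of an independent variable using an independent-samples t-test. The numbers are invented and scipy is assumed to be available.

# Sketch: testing whether an independent variable (interface A vs B) affects
# a dependent variable (task completion time, in seconds). Invented data.
from scipy import stats

times_a = [62, 71, 58, 66, 80, 59, 74]   # condition A (e.g., predictive text off)
times_b = [48, 55, 61, 50, 57, 46, 53]   # condition B (e.g., predictive text on)

t, p = stats.ttest_ind(times_a, times_b)  # suits a different-participants design
print(f"t = {t:.2f}, p = {p:.3f}")
if p < 0.05:
    print("Difference unlikely to be due to chance at the 5% level")
else:
    print("No statistically significant difference detected")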

Page 11: Usability evaluations (part 2)

Experimental designs

• Different participants - a single pool of participants is allocated randomly across the experimental conditions
– Different participants perform in different conditions

• Same participants - all participants appear in both conditions

• Matched participants - participants are matched in pairs, e.g., based on expertise, gender, etc.

11

Page 12: Usability evaluations (part 2)

Different, same, matched participant design

Different participants
– Advantages: no order effects
– Disadvantages: many subjects needed; individual differences are a problem

Same participants
– Advantages: few individuals; no individual differences
– Disadvantages: counter-balancing needed because of ordering effects

Matched participants
– Advantages: same as different participants, but individual differences reduced
– Disadvantages: cannot be sure of perfect matching on all differences

12
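A minimal sketch (my own illustration, not from the lecture) of how the first two designs could be set up in practice: random allocation of a participant pool to conditions for a different-participants design, and an alternating AB/BA order to counter-balance ordering effects in a same-participants design. Participant and condition names are hypothetical.

# Sketch: allocating participants for different- and same-participants designs.
import random

participants = [f"P{i}" for i in range(1, 11)]   # hypothetical participant pool
conditions = ["sitting", "treadmill"]

# Different participants: randomly split the pool across the conditions.
random.shuffle(participants)
half = len(participants) // 2
allocation = {
    conditions[0]: participants[:half],
    conditions[1]: participants[half:],
}
print(allocation)

# Same participants: everyone does both conditions; alternate the order
# (AB, BA, AB, ...) to counter-balance ordering effects.
orders = {}
for i, p in enumerate(sorted(participants)):
    orders[p] = list(conditions) if i % 2 == 0 else list(reversed(conditions))
print(orders)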

Page 13: Usability evaluations (part 2)

Examples

• The next slides describe 2 experiments: the one behind the book Prioritizing Web Usability and a fictional one on OpenSMSDroid

• Both use Thinking Aloud and video/screen recording for data collection

13

Page 14: Usability evaluations (part 2)

Prioritizing Web Usability

• Prioritizing Web Usability (Nielsen and Loranger, 2006) used the Thinking Aloud method to collect insight on user behaviour:
– 69 users, all with at least one year of experience in using the web
– Broad range of job backgrounds and web experience – but no one working in IT or marketing
– 25 web sites tested with specific tasks
– Windows desktops with 1024x768 resolution running Internet Explorer
– Recordings of monitor and upper body for each session
– Broadband speed between 1 and 3 Mbps

14

Page 15: Usability evaluations (part 2)

Prioritizing Web Usability (2)

• The tasks that the users were asked to perform included:
– Go to ups.com and find how much it costs to send a postcard to China
– You want to visit the Getty Museum this weekend. Go to getty.edu and find opening times/prices
– Go to nestle.com and find a snack to eat during workouts
– Go to bankone.com and find the best savings account with a $1000 balance

15

Page 16: Usability evaluations (part 2)

Prioritizing Web Usability (3)

• The results of the research are presented as a book:
– Categorising the findings (including searching, navigation, typography and writing style)

– Using plenty of examples and screenshots to demonstrate the usability issues that were identified

16

Page 17: Usability evaluations (part 2)

Prioritizing Web Usability: findings

• People succeed 66% of the time when working on “single site” activities and 60% of the time when having to browse through the internet for information

17

Page 18: Usability evaluations (part 2)

Prioritizing Web Usability: findings (2)

• Experienced users spend about 25 seconds on a homepage and 45 on an interior page (35 and 60 for inexperienced users)
• Only 23% of users scroll on their first visit to a homepage
– The number decreases on subsequent visits
– The average scroll on a first visit is 0.8 of a screen

18

Page 19: Usability evaluations (part 2)

Prioritizing Web Usability: findings (3)

• 88% of users go to search engines to find information
• Font face and size: different font faces for print and screen
– Different font size depending on target audience

• More in the book…

19

Page 20: Usability evaluations (part 2)

OpenSmsDroid evaluation

• You have been tasked with evaluating the usability of a new (fictional) Android application for writing short text messages, OpenSMSDroid
• You have decided to set up an experiment
– The next experiment is (loosely) adapted from “Experimental Evaluation of Techniques for Usability Testing of Mobile Systems in a Laboratory Setting” (Beck, Christiansen, Kjeldskov, Kolbe and Stage, 2003)

20

Page 21: Usability evaluations (part 2)

OpenSmsDroid evaluation

• Your test users will perform a set of tasks in specific configurations, using the thinking aloud method for data collection
– A constraint of 5 minutes has been set for each of the tasks
– The usability researcher will record the session and take notes

21

Page 22: Usability evaluations (part 2)

OpenSmsDroid evaluation: testing configurations

• Configurations for the test (tentative list):
– Sitting on a chair at a table
– Walking on a treadmill at constant speed
– Walking on a treadmill at varying speed
– Walking on an 8-shaped course that is changing as obstructions are being moved, within 2 meters of a person that walks at constant speed
– Walking on an 8-shaped course that is changing as obstructions are being moved, within 2 meters of a person that walks at varying speed

– Walking in Westfield Stratford at 16:00 on Saturday

22

Page 23: Usability evaluations (part 2)

OpenSmsDroid evaluation: testing configurations (2)

• For practical reasons and after reviewing the literature, these settings have been selected for this evaluation:
– Sitting on a chair at a table
– Walking on a treadmill at constant speed
– Walking in Westfield Stratford at 16:00 on Saturday

23

Page 24: Usability evaluations (part 2)

OpenSmsDroid evaluation: tasks

• Writing a new SMS containing the phrase “The quick brown fox jumps over the lazy dog” repeated 2 times to an existing contact (without using predictive text features)
• Writing a new SMS containing the phrase “The quick brown fox jumps over the lazy dog” repeated 2 times to an existing contact (using predictive text features)
• Taking a picture and sending it to an existing contact
• Taking a short 1-minute video and sending it to an existing contact

24

Page 25: Usability evaluations (part 2)

OpenSmsDroid evaluation: tasks (2)

• In each test, you can collect:
– Quantitative data: time needed to perform the task, and whether the task has been completed

– Qualitative data: asking the user to think aloud while interacting with the device and recording the interaction

25
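As a sketch of how the quantitative data could be recorded during each session (illustrative only; the file name, column layout and helper names are my own assumptions, not part of the evaluation plan), each task attempt could be appended as one row of a CSV file:

# Sketch: logging one row per task attempt (user, configuration, task, time, completion).
import csv
import time

def run_trial(user, configuration, task, perform_task, log_path="opensmsdroid_log.csv"):
    start = time.monotonic()
    completed = perform_task()                     # returns True/False within the 5-minute limit
    elapsed = round(time.monotonic() - start, 1)   # time needed to perform the task, in seconds
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([user, configuration, task, elapsed, completed])

# Example use during a session (hypothetical):
# run_trial("P4", "treadmill", "sms_no_predictive", lambda: True)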

Page 26: Usability evaluations (part 2)

OpenSmsDroid evaluation: data analysis

• The evaluation will analyse the data collected and report on any findings, highlighting any differences in performance between configurations and suggesting possible changes to the interface
– An experiment can also generate further hypotheses to be tested in follow-up experiments

26
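One possible way (my own sketch, not prescribed by the lecture) to look for differences in performance between the three configurations is a one-way ANOVA on completion times, assuming the logged data has been grouped by configuration; the values below are invented and scipy is assumed to be available.

# Sketch: comparing completion times (seconds) across the three configurations.
# Data values are invented for illustration.
from scipy import stats

sitting   = [52, 61, 48, 57, 63]
treadmill = [66, 72, 70, 59, 75]
shopping  = [81, 77, 90, 69, 84]   # Westfield Stratford condition

f, p = stats.f_oneway(sitting, treadmill, shopping)
print(f"F = {f:.2f}, p = {p:.3f}")
# A small p-value suggests at least one configuration differs in mean completion
# time; pairwise follow-up comparisons would identify which ones.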

Page 27: Usability evaluations (part 2)

Usability inquiry

• Usability inquiry methods focus (to different degrees) on analysing an artefact either from “the native point of view” or by looking for “the native point of view”
– Used to obtain information about users' likes, dislikes, needs, and understanding of the system

27

Page 28: Usability evaluations (part 2)

Usability inquiry (2)

• They may use one or more of these techniques:
– Talking to users
– Observing users using a system in a real working situation
– Letting the users answer questions (verbally or in written form)

28

Page 29: Usability evaluations (part 2)

Data collection & analysis

• Data collection (most methods described in the previous weeks):
– Observation & interviews (e.g. contextual inquiry)
– Notes, pictures, recordings, diaries
– Video
– Logging
• Analyses
– Categorizing the findings
– Using existing categories, which can be provided by pre-existing research

29

Page 30: Usability evaluations (part 2)

Diary method

• The diary method requires users to keep a diary of their interactions

• Diaries can be free form or structured
– The diary method is best used when the researcher does not have the time, the resources or the possibility to use user monitoring methods, or when the level of detail provided by user monitoring methods is not needed

30

Page 31: Usability evaluations (part 2)

Contextual inquiry

• Contextual inquiry is a structured field interviewing method which typically evaluates:
– User opinions
– User experience
– Motivation
– Context

• It is a study based on dialogue and interaction between interviewer and user, and it is one of the best methods to use when researchers need to understand the users' work context.

31

Page 32: Usability evaluations (part 2)

Data presentation

• The aim is to show how the products are being appropriated and integrated into their surroundings.

• Typical presentation forms include: vignettes, excerpts, critical incidents, patterns, and narratives.

32

Page 33: Usability evaluations (part 2)

Interviews and focus groups

• Interviews and focus groups are research methods based on interaction between researchers and users
– The researcher facilitates the discussion about the issues raised by the questions
– In focus groups (multiple users present), the interaction among the users may raise additional issues, or identify common problems that many people experience

33

Page 34: Usability evaluations (part 2)

Surveys

• Surveys are a quantitative research method, where a set list of questions is asked and the users' responses recorded
– When the questions are administered by a researcher, the survey is called a structured interview
– When the questions are self-administered by the respondent, the survey is referred to as a questionnaire

34

Page 35: Usability evaluations (part 2)

References

• Beck, E., Christiansen, M., Kjeldskov, J., Kolbe, N. and Stage, J. (2003). ‘Experimental Evaluation of Techniques for Usability Testing of Mobile Systems in a Laboratory Setting’, OzCHI 2003.

• Nielsen, J. and Loranger, H. (2006). Prioritizing Web Usability. New Riders.

35