Risk-Based Testing - Designing & Managing the Test Process (2002)


Page 1: Risk-Based Testing - Designing & managing the test process (2002)

Version 1.0a ©2002 Systeme Evolutif Ltd and TiSCL Slide 1

Risk-Based Testing

Paul Gerrard & Neil Thompson

Systeme Evolutif Limited
3rd Floor, 9 Cavendish Place
London W1M 9DL, UK
Tel: +44 (0)20 7636 6060
Fax: +44 (0)20 7636 6072
email: [email protected]
http://www.evolutif.co.uk/

Thompson information Systems Consulting Limited
23 Oast House Crescent
Farnham, Surrey GU9 0NP, UK
Tel: +44 (0)7000 NeilTh (634584)
Fax: +44 (0)7000 NeilTF (634583)
email: [email protected]
http://www.TiSCL.com/

Page 2

Agenda
I Introduction
II Risk-Based Test Strategy
III Risk-Based Test Planning and Organisation
IV Managing Test Execution and the End-Game
V Close, Q&A

Here’s the commercial bit:
– Most of the material presented today is based on Risk-Based E-Business Testing, Gerrard and Thompson, Artech House, 2002
– Visit www.riskbasedtesting.com for samples.

Page 3

Paul Gerrard

Systeme Evolutif is a software testing consultancy specialising in E-Business testing, RAD, test process improvement and the selection and implementation of CAST tools. Evolutif is a founder member of the DSDM (Dynamic Systems Development Method) consortium, which was set up to develop a non-proprietary Rapid Application Development method. DSDM has been taken up across the industry by many forward-looking organisations.

Paul is the Technical Director and a principal consultant for Systeme Evolutif. He has conducted consultancy and training assignments in all aspects of Software Testing and Quality Assurance. Previously, he has worked as a developer, designer, project manager and consultant for small and large developments. Paul has engineering degrees from the Universities of Oxford and London, is Co-Programme Chair for the BCS SIG in Software Testing, a member of the BCS Software Component Test Standard Committee and Former Chair of the IS Examination Board (ISEB) Certification Board for a Tester Qualification whose aim is to establish a certification scheme for testing professionals and training organisations. He is a regular speaker at seminars and conferences in Europe and the US, and won the ‘Best Presentation’ award at EuroSTAR ’95.

Page 4

Neil Thompson

Neil Thompson is the Director of Thompson information Systems Consulting Ltd, a company he set up in 1998 as a vehicle for agile, impartial consultancy and management to blue-chip clients, sometimes in association with other consultancies such as Systeme Evolutif. Neil’s background is in programming, systems analysis and project management. He has worked for a computer hardware manufacturer, two software houses, an international user organisation and two global management consultancies, and currently feels fulfilled as a specialist in software testing.

He is a Certified Management Consultant (UK), and a member of the British Computer Society (BCS) and the IEEE. Neil is active in the BCS Specialist Interest Groups in Software Testing and Configuration Management. He spoke at the first and second EuroSTAR conferences in 1993-4, and again in 1999.

Neil studied Natural Sciences at Cambridge University (BA & MA degrees). He holds the ISEB Foundation Certificate in software testing and will be taking the Practitioner examination in December.

Page 5

Part I
Introduction

Page 6

Part I - Agenda
How Much Testing is Enough?
When is the Product Good Enough?
Introduction to Risk
The V-Model and W-Model.

Page 7

How Much Testing is Enough?

(can risks help?)

Page 8

“I need six testers for eight weeks…”

The project manager says:
– “Four testers for six weeks and that’s it!”

The testers say:
– It’ll take longer than six weeks
– It’ll cost more than the budget allocated
– It’s too big/complicated/risky for us to do properly
– “It’s just not enough”

Was it ever possible to do “enough” testing in these circumstances?

Page 9

Testing is time delimited…always

No upper limit to the amount of testing
Even in the highest integrity environments, time and cost limit what we can do
Testing is about doing the best job we can in the time available
Testers should not get upset if our estimates are cut down (or ignored)
The benefits of early release may be worth the risk
But who knows the status of the risks and benefits?

Page 10

Did any tester ever get enough time to test? No, of course not. We need to separate the two responses:
– the knee-jerk reaction (a back-covering exercise prior to the post-disaster “I told you so!”)
– a rational evaluation of the risks, and the response to doing ‘too little’

Often, just like cub-scouts, testers “promise to do their best” and don’t make waves.

Page 11

Risk-based test planning
If every test aims to address a risk, tests can be prioritised by risk
It’s always going to take too long, so…
– Some tests are going to be dropped
– Some risks are going to be taken
Proposal:
– The tester is responsible for making the project aware of the risks being taken
– Only if these risks are VISIBLE will management ever reconsider.
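The prioritisation idea above can be sketched in code. This is a hypothetical illustration, not material from the tutorial: the test names, exposure scores and effort figures are invented. Tests are ranked by the risk exposure they address, whatever fits the budget stays in scope, and the remainder becomes an explicit list of risks being taken.

```python
# Hypothetical sketch (names, exposures and effort figures are invented):
# rank tests by the risk exposure they address, keep what fits the budget,
# and report the dropped tests so the risks being taken stay visible.

def prioritise(tests, budget_days):
    """Sort tests by descending risk exposure; keep those that fit the budget."""
    ranked = sorted(tests, key=lambda t: t["exposure"], reverse=True)
    in_scope, cost = [], 0
    for t in ranked:
        if cost + t["days"] <= budget_days:
            in_scope.append(t)
            cost += t["days"]
    dropped = [t for t in ranked if t not in in_scope]
    return in_scope, dropped

tests = [
    {"name": "premium calculation", "exposure": 20, "days": 5},
    {"name": "browser compatibility", "exposure": 12, "days": 4},
    {"name": "statement reconciliation", "exposure": 16, "days": 6},
    {"name": "cosmetic layout checks", "exposure": 2, "days": 3},
]
in_scope, dropped = prioritise(tests, budget_days=15)
print("in scope:", [t["name"] for t in in_scope])
print("risks taken:", [t["name"] for t in dropped])
```

Reporting the dropped list is the point of the exercise: the risks being taken are made visible to management rather than the plan being cut silently.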

Page 12

So how much testing is enough?
Enough testing has been planned when the stakeholders (user/customer, project manager, support, developers) approve:
THE TESTS IN SCOPE
– They address risks of concern and/or give confidence
THE TESTS THAT ARE OUT OF SCOPE
– Risk is low OR these tests would not give confidence
The amount and rigour of testing is determined by CONSENSUS.

Page 13

When is the Product “Good Enough”?

Page 14

Compulsive behaviour
Consultants, ‘gurus’ and academics preach perfection through compulsive behaviour:
» product perfection through process maturity and continuous process improvement
» all bugs are bad, all bugs could be found, so use more and more rigorous/expensive techniques
» documentation is always worthwhile
» you can’t manage what you can’t count
» etc., etc.
Process hypochondriacs can’t resist.

Page 15

“Good Enough”
James Bach* is the main advocate of the ‘Good Enough’ view (also Yourdon and others)
A reaction to compulsive formalism:
– if you aim at perfection, you’ll never succeed
– your users/customers/businesses live in the real world; why don’t you?
– compromise is inevitable; don’t kid yourself
– guilt and fear should not be part of the process.

* James Bach, “Good Enough Quality: Beyond the Buzzword”, Computer, August 1997. Web site: www.satisfice.com

Page 16

“Good Enough” means:
1. X has sufficient benefits
2. X has no critical problems
3. Benefits of X sufficiently outweigh problems
4. In the present situation, and all things considered, improving X would cause more harm than good.

All the above must apply.
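The conjunctive nature of these criteria (“all the above must apply”) can be expressed as a one-line check. A toy sketch, with illustrative inputs not taken from the tutorial:

```python
# Toy sketch of the "Good Enough" criteria: the product qualifies only if
# every one of the four judgements holds. Inputs are illustrative.

def good_enough(sufficient_benefits, no_critical_problems,
                benefits_outweigh_problems, improving_would_do_more_harm):
    # "All the above must apply" -- a single False vetoes the release.
    return all([sufficient_benefits, no_critical_problems,
                benefits_outweigh_problems, improving_would_do_more_harm])

print(good_enough(True, True, True, True))    # all criteria hold
print(good_enough(True, False, True, True))   # a critical problem vetoes
```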

Page 17

Contribution of testing to the release decision

Have sufficient benefits been delivered?
– Tests must at least demonstrate that features providing benefits are delivered completely
Are there any critical problems?
– Test records must show that any critical problems have been corrected, re-tested, regression tested
Is our testing Good Enough?
– Have we provided sufficient evidence to be confident in our assessment?

Page 18

Who makes the release decision?

“Good enough” is in the eye of the stakeholder
Testers can only say:
– “at the current time, our tests demonstrate that:
» the following features/benefits have been delivered
» the following risks have been addressed
» these are the outstanding risks of release...”
Stakeholders and management, not the tester, can then make the decision.

Page 19

Introduction to Risk

Page 20

The definition of risk
Italian dictionary: risicare, “to dare”
Simple generic definition:
– “The probability that undesirable events will occur”
In this tutorial, we will use this definition:
“A risk threatens one or more of a project’s cardinal objectives and has an uncertain probability”.

Page 21

Some general statements about risk

Risks only exist where there is uncertainty

If the probability of a risk is zero or 100%, it is not a risk

Unless there is the potential for loss, there is no risk (“nothing ventured, nothing gained”)

There are risks associated with every project

Software development is inherently risky.

Page 22

Cardinal Objectives
– The fundamental objectives of the system to be built
– Benefits of undertaking the project
– Payoff(s) that underpin and justify the project

Software risks are those that threaten the Cardinal Objectives of a project.

Page 23

Three types of software risk

Project Risk: resource constraints, external interfaces, supplier relationships, contract restrictions

Process Risk: variances in planning and estimation, shortfalls in staffing, failure to track progress, lack of quality assurance and configuration management

Project and process risks are primarily a management responsibility; planning and the development process are the main issues here.

Product Risk: lack of requirements stability, complexity, design quality, coding quality, non-functional issues, test specifications

Requirements risks are the most significant risks reported in risk assessments. Testers are mainly concerned with Product Risk.

Page 24

Quantitative or qualitative
Risks can be quantified:
– Probability = 50%
– Consequence or loss = £100,000
– Nice to use numbers for calculations/comparisons
– But we deal in perceived risk, not absolute risk
Or qualified:
– Probability is high, medium or low
– Consequence is critical, moderate or low
– More accessible; usable in discussions and risk workshops.

Page 25

Uncertainty
Consequence
– Do you know the consequences of failure?
– Need user input to determine this
– Some costs could be calculated but others are intangible (credibility, embarrassment etc.)
Probability
– The toughest one to call
– Crystal-ball gazing

In conducting a risk assessment, make it clear what level of uncertainty you are working with
Testing (and test results) can reduce uncertainty.

Page 26

The V-Model and W-Model

Page 27

V-Model

[Diagram: the V-Model. Specification side, top to bottom: Requirements, Functional Specification, Physical Design, Program Specification. Testing side, bottom to top: Unit Test, Integration Test, System Test, User Acceptance Test, each test stage paired with its baseline document.]

Is there ever a one-to-one relationship between baseline documents and testing?

Where is the static testing (reviews, inspections, static analysis etc.)?

Page 28

W-Model

[Diagram: the W-Model. Development activities: Write Requirements, Specify System, Design System, Write Code, Build Software, Build System, Install System. Test activities: Test the Requirements, Test the Specification, Test the Design, Unit Test, Integration Test, System Test, Acceptance Test.]

Page 29

W-Model and static testing

[Diagram: the W-Model as before, with static test techniques attached to the early activities: Reviews, Inspections, Scenario Walkthroughs, Early Test Case Preparation, Static Analysis, Requirements Animation.]

Page 30

W-Model and dynamic testing

[Diagram: the W-Model as before, with dynamic test techniques attached to the test activities: Equivalence Partitioning, Boundary Value Testing, Path Testing, Exploratory Testing, Usability Testing, Performance Testing, Security Testing, System Integration Testing, Business Integration Testing.]

Page 31

What do we mean by a (work) product?

Project documents:
– schedule, quality plan, test strategy, standards
Deliverables:
– requirements, designs, specifications, user documentation, procedures
– software: custom built or COTS components, sub-systems, systems, interfaces
– infrastructure: hardware, O/S, network, DBMS
– transition plans, conversion software, training...

Page 32

What do we mean by testing?

Testing is the process of evaluating the deliverables of a software project:
– detect faults so they can be removed
– demonstrate products meet their requirements
– gain confidence that products are ready for use
– measure and reduce risk
Testing includes:
– static tests: reviews, inspections etc.
– dynamic tests: unit, system, acceptance tests etc.

Page 33

Part II
Risk-Based Test Strategy

Page 34

Part II - Agenda
Risk Management Process
The Role of Testing in Product Risk Management
Identifying Benefits and Risks
From Risks to Test Objectives
Generic Test Objectives and System Requirements
Test Objectives, Coverage and Techniques

Page 35

Risk Management Process

Page 36

Process
Risk identification
– what are the risks to be addressed?
Risk analysis
– nature, probability, consequences, exposure
Risk response planning
– pre-emptive or reactive risk reduction measures
Risk resolution and monitoring

Stakeholders should be involved at all stages.

Page 37

Assessing consequences (loss)

Severity     Description                                  Score
Critical     business objectives cannot be accomplished   5
High         business objectives undermined               4
Moderate     business objectives affected                 3
Low          slight effect on business                    2
Negligible   no noticeable effect                         1

Page 38

Assessing probability (likelihood)

Probability   Description                                      Score
>80%          almost certain, highly likely                    5
61-80%        probable, likely, we believe                     4
41-60%        better than even, 50/50                          3
21-40%        unlikely, probably not, we doubt                 2
1-20%         improbable, chances are slight, highly unlikely  1

Page 39

Risk exposure
Risks with the highest exposure are those of most concern
Worst-case scenarios drive concerns
Risk EXPOSURE is calculated as the product of the PROBABILITY and the CONSEQUENCE of the risk
A simple notation is L²
– where L² = LIKELIHOOD × LOSS.
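Using the 1-5 scores from the two tables above, exposure reduces to a multiplication. A hypothetical sketch (the risk names and scores are invented for illustration, not from the tutorial):

```python
# Illustrative sketch: risk exposure = probability score x consequence score,
# each on the 1-5 scales from the two preceding tables. Risk names invented.

def exposure(probability_score, consequence_score):
    """The 'L squared' idea: LIKELIHOOD x LOSS, each scored from 1 to 5."""
    assert 1 <= probability_score <= 5 and 1 <= consequence_score <= 5
    return probability_score * consequence_score

risks = {
    "premium miscalculated": (4, 5),    # probable, critical
    "site slow at peak load": (3, 3),   # 50/50, moderate
    "rare cosmetic glitch": (1, 2),     # highly unlikely, low
}
ranked = sorted(risks, key=lambda name: exposure(*risks[name]), reverse=True)
for name in ranked:
    print(name, exposure(*risks[name]))  # highest exposure first
```

The scores carry no absolute meaning; as the next slide notes, it is the ability to compare risks that matters.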

Page 40

What do the numbers mean?
Sometimes you can use numeric assessments
– We may have experience that tells us:
» Likelihood is high (it always seems to happen)
» Loss is £50,000 (that’s what it cost us last time)
But often, we are guessing
– Use of categories helps us to compare risks
– Subjective perceptions (never the same)
– E.g. developers may not agree with users on probability!
Maybe you can only assign risk RAG numbers
– RED, AMBER, GREEN

The ability to compare is what is most important.

Page 41

The danger slope

[Chart: a probability/consequence grid. Probability axis: Highly Unlikely, Unlikely, Improbable, Likely, Very Likely. Consequence axis: Negligible, Low, Moderate, High, Critical. An annotation points towards the low-probability, low-consequence corner: “Where we want to move all risks”.]

Page 42

Risk response planning
Do nothing!
Pre-emptive risk reduction measures:
– information buying (this is where testing fits in)
– process model
– risk influencing
– contractual transfer
Reactive risk reduction measures:
– contingency plans
– insurance

The risk that’s left is the residual risk.

Page 43

Role of Testing in Product Risk Management

Page 44

Faults, failure and risk
System failures are what we fear
The faults that cause failures are our prey
Uncertainty is what makes us concerned:
– what type of faults are present in the system?
– how many faults are in the system?
– did testing remove all the serious faults?

Testing helps us to address these uncertainties.

Page 45

Testing helps to reduce risk
If risk assessment steers test activity:
– we design tests to detect faults
– we reduce the risks caused by faulty products
Faults found early reduce rework, cost and time lost in later stages
Faults found are corrected and re-tested, and so the quality of all products is improved.

Page 46

Testing can measure risk
Testing is a measurement activity
Tests that aim to find faults provide information on the quality of the product:
– which parts of the software are faulty
– which parts of the software are not faulty
Tests help us understand the risk of release
Understanding the risks helps us to make a risk-based decision on release
After testing, our risk assessment can be refined.

Page 47

The test passes…

The risk could be unchanged because:

Risk probability higher because:

Risk probability lower because:

Risk consequence higher because:

Risk consequence lower because:

Page 48

The test fails…

The risk could be unchanged because:

Risk probability higher because:

Risk probability lower because:

Risk consequence higher because:

Risk consequence lower because:

Page 49

Identifying Benefits and Risks

Page 50

Business benefits (cardinal objectives)

At the highest level, normally set out in a project initiation document or business case etc.

A benefit is any 'good thing' required to be achieved by a project

Normally expressed in financial/business terms, e.g.:
Save money - one or a combination of:
– cut staff, stock, work in progress, time to deliver…
Increase revenues - one or a combination of:
– increase market share, launch new product, improve existing product, increase margins, exploit a new market…

Page 51

Risk identification
– Expert interviews
– Independent consultant or domain expert assessment
– Past experience (lessons learned)
– Checklists of common risks (risk templates)
– Risk workshops
– Brainstorming.

Page 52

Risk workshops
Brainstorming sessions can be very productive:
– make risks visible
– generate risk ideas
– generate ideas for resolution
– start buy-in.

Page 53

Exercise
Main features of an ATM:
– Validation of a customer’s card and PIN
– Cash withdrawal
– On-line balance request
– Request a statement
…amongst others.

Page 54

What kind of failures could occur for each of the four requirements?

[Blank exercise table with columns: Failure, Bank POV, Customer POV.]

Page 55

Risks and viewpoints
Viewpoint has an influence over which risks are deemed important
Developer/supplier viewpoint
– what stops us getting paid for the system?
Customer/service provider viewpoint
– what could lose us business, money?
Customer viewpoint
– what could lose ME money?

Page 56

The tester’s viewpoint
Typically, testers represent either suppliers or their customers
Main stakeholders in the project:
– system supplier
– customer/buyer
– service provider and support
– end-users

Testers may work for one stakeholder, but should consider the concerns of all.

Page 57

From Risks to Test Objectives

Page 58

Why use risks to define test objectives?

If we focus on risks, we know that bugs relating to the selected mode of failure are bound to be important.

If we focus on particular bug types, we will probably be more effective at finding those bugs

If testers provide evidence that certain failure modes do not occur in a range of test scenarios, we will become more confident that the system will work in production.

Page 59

Risks as failure modes or bug types

Risks describe ‘what we don’t want to happen’
Typical modes of failure:
– calculations don’t work
– pages don’t integrate
– performance is poor
– user experience is uncomfortable
Think of them as ‘generic bug types’.

Page 60

Defining a test objective from risk

We ‘turn around’ the failure mode or risk
Risk:
– a BAD thing happens and that’s a problem for us
Test objective:
– demonstrate using a test that the system works without the BAD thing happening
The test:
– execute important user tasks and verify the BAD things don’t happen in a range of scenarios.

Page 61

Risks and test objectives - examples

Risk: The web site fails to function correctly on the user’s client operating system and browser configuration.
Test objective: To demonstrate that the application functions correctly on selected combinations of operating systems and browser versions.

Risk: Bank statement details presented in the client browser do not match records in the back-end legacy banking systems.
Test objective: To demonstrate that statement details presented in the client browser reconcile with the back-end legacy systems.

Risk: Vulnerabilities that hackers could exploit exist in the web site networking infrastructure.
Test objective: To demonstrate through audit, scanning and ethical hacking that there are no security vulnerabilities in the web site networking infrastructure.

Page 62

Generic Test Objectives and System Requirements

Page 63

Risk-based test objectives are usually not enough

Other test objectives relate to broader issues:
– contractual obligations
– acceptability of a system to its users
– demonstrating that all or specified functional or non-functional requirements are met
– non-negotiable test objectives might relate to mandatory rules imposed by an industry regulatory authority, and so on

Generic test objectives complete the definition of your test stages.

Page 64

Generic test objectives (objective, with typical test stage in brackets):

– Demonstrate component meets requirements [Component Testing]
– Demonstrate component is ready for reuse in a larger sub-system [Component Testing]
– Demonstrate integrated components are correctly assembled/combined and collaborate [Integration Testing]
– Demonstrate system meets functional requirements [Functional System Testing]
– Demonstrate system meets non-functional requirements [Non-Functional System Testing]
– Demonstrate system meets industry regulation requirements [System or Acceptance Testing]
– Demonstrate supplier meets contractual obligations [(Contract) Acceptance Testing]
– Validate system meets business or user requirements [(User) Acceptance Testing]
– Demonstrate system, processes and people meet business requirements [(User) Acceptance Testing]

Page 65

Tests as demonstrations
“Demonstrate” is most often used in test objectives
Better than “prove”, which implies mathematical certainty (which is impossible)
But is the word “demonstrate” too weak?
– it represents exactly what we will do
– we provide evidence for others to make a decision
– we can only run a tiny fraction of the tests that are possible
– so we really are only doing a demonstration with a small sample of tests.

Page 66

But tests should aim to locate faults, shouldn't they?

The tester’s goal: to locate faults
We use boundary tests, extreme values, invalid data, exceptional conditions etc. to expose faults:
– if we find faults, these are fixed and re-tested
– we are left with tests that were designed to detect faults; some did detect faults, but do so no longer
We are left with evidence that the feature works correctly and our test objective is met
There is no conflict between:
– strategic risk-based test objectives and
– the tactical goal of locating faults.

Page 67

Testing and meeting requirements

Risk-based test objectives do not change the methods of test design much
Functional requirements:
– We use formal or informal test design techniques as normal
Non-functional requirements:
– Test objectives are often detailed enough to derive specific tests.

Page 68

Test Objectives, Coverage and Techniques

Page 69

Risks can be used to…
Determine the ‘level’ of testing to be performed
– Four risk levels are typically used by most safety-related standards
– Specific test case design techniques and test completion criteria are mandated for each level of risk
Determine the ‘type’ of testing to be performed
– E.g. performance, usability, security and backup/recovery testing all address different types of risk

High-risk components might be subjected to:
– Code inspections, static analysis and formal component testing to 100% modified condition/decision coverage.

Page 70: Risk-Based Testing - Designing & managing the test process (2002)


Example risk – test objective – techniques
Risk: feature xxx fails to calculate an insurance premium correctly.
Potential test techniques:
– review or inspect requirements for xxx
– review or inspect specification of xxx
– review or inspect design of xxx
– review or inspect component(s)
– code inspection, static analysis
– defined tests at component, system or acceptance level
We choose the most effective, practical technique that can be performed as early as possible.

Page 71: Risk-Based Testing - Designing & managing the test process (2002)


Test objectives and coverage

It is easy to derive a test objective from a risk, but less easy to derive an objective coverage measure to plan for: “How much testing (how many tests) is required to provide enough information to stakeholders that a risk has been addressed?”
There are well documented coverage measures for both black box and white box test design techniques, but how can we use these?

Page 72: Risk-Based Testing - Designing & managing the test process (2002)


Black box test coverage
Some techniques subsume others, e.g.:
– boundary value analysis (BVA) subsumes equivalence partitioning (EP) and is ‘stronger’
Other test techniques (e.g. decision tables, state-transition testing, syntax testing etc.) do not fit into such an ordered hierarchy.
These (and other) techniques have specific areas of applicability.
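The subsumption claim can be made concrete with a small sketch; the range, helper names and chosen values below are illustrative assumptions, not from the slides:

```python
# Sketch: equivalence partitioning (EP) vs boundary value analysis (BVA)
# for a numeric input accepted in the range [lo, hi]. Illustrative only;
# the helper names and chosen values are assumptions, not from the slides.

def ep_values(lo, hi):
    """One representative value per partition: below, inside, above."""
    return [lo - 10, (lo + hi) // 2, hi + 10]

def bva_values(lo, hi):
    """Values at and either side of each boundary. These include
    representatives of every partition, which is why BVA 'subsumes' EP."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

print(ep_values(1, 100))   # [-9, 50, 110]
print(bva_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```

Every partition that `ep_values` samples is also sampled by `bva_values`, but not vice versa: that is the sense in which BVA is ‘stronger’.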

Page 73: Risk-Based Testing - Designing & managing the test process (2002)


White box test techniques
Most are based on the path-testing model.
A more ‘inclusive’ ordered hierarchy:
– statement testing (weakest)
– branch testing
– modified condition decision testing
– branch condition combination testing (strongest)
Although costs vary, there is little data available that compares their cost-effectiveness.
British Standard BS 7925-2, available at www.testingstandards.co.uk, has all the definitions.

Page 74: Risk-Based Testing - Designing & managing the test process (2002)


Selecting the ‘right’ test techniques

The tester should explain to the stakeholders how these techniques are used, their potential depth and the consequent cost of using them.
Some test techniques are less formal, or not yet mature enough to have defined coverage levels.
For example, coverage targets for configuration testing and for ethical hacking as a means of detecting security flaws are likely to be subjective.

Page 75: Risk-Based Testing - Designing & managing the test process (2002)


Selecting the right coverage level
Stakeholders will PAY only so much to address a risk.
Testers need to reconcile the concerns of stakeholders with the coverage measure to be used:
– the tester may need to assess the quantity of testing that could be achieved in the time available
– they may also need to offer stakeholders degrees of coverage and quote the different costs of each
The tester needs to explain, both quantitatively and qualitatively, the techniques and coverage to be used.
The stakeholders must then take a view on which of these coverage targets will be adequate.

Page 76: Risk-Based Testing - Designing & managing the test process (2002)


Part III
Risk Based Test Planning and Organisation

Page 77: Risk-Based Testing - Designing & managing the test process (2002)


Part III - Agenda
– Designing the Test Process
– Stages, Teams and Environments
– Estimation
– Planning and Scheduling
– Test Specification and Traceability.

Page 78: Risk-Based Testing - Designing & managing the test process (2002)


Designing the Test Process

Page 79: Risk-Based Testing - Designing & managing the test process (2002)


Master Test Planning process

Risk Identification (Tester Activity)
• Consult business, technical staff
• Prepare a draft register of risks

Risk Analysis (Workshop)
• Discuss risks
• Assign probability and consequence scores
• Calculate exposure

Risk Response (Tester Activity)
• Formulate test objectives, select test technique
• Document dependencies, requirements, costs, timescales for testing
• Assign Test Effectiveness score
• Nominate responsibilities

Test Scoping (Review and Decision)
• Agree scope of risks to be addressed by testing
• Agree responsibilities and budgets

Test Process Definition (Tester Activity)
• Draft the test process from the Test Process Worksheet
• Complete test stage definitions

Page 80: Risk-Based Testing - Designing & managing the test process (2002)


Internet banking example

(Diagram: customer workstation → HTTPS → firewall → HTTPS → HTTP dispatcher → web server(s) → RMI/IIOP → application server(s) → MQ, RMI/IIOP / MQ/Series → existing backends and billing production servers, plus a printer.)

Page 81: Risk-Based Testing - Designing & managing the test process (2002)


Example
A bank aims to offer an internet-based service to give its corporate customers:
– balance and account information
– multi-account management and inter-account transfers
– payments
How many ways could this system fail? Which should be considered for testing?

Page 82: Risk-Based Testing - Designing & managing the test process (2002)


Failure mode and effects analysis
FMEA is a method for:
– assessing the risk of different failure modes
– prioritising courses of action (in our case, testing)
We identify the various ways in which a product can fail.
For each failure mode we assign scores.
We can use these profiles to decide what to do.

Page 83: Risk-Based Testing - Designing & managing the test process (2002)


Profiling failure modes
What is the consequence of the failure?
What is the probability of the failure occurring if no action is taken?
What is the probability of detecting this with testing?
– How would you detect the problem that causes the failure?

Page 84: Risk-Based Testing - Designing & managing the test process (2002)


Failure mode profiling
Each mode is scored:
– P: probability of occurrence (1 low - 5 high)
– C: consequence (1 low - 5 high)
– T: test effectiveness (1 low - 5 high)
P x C x T = Risk Number.

Page 85: Risk-Based Testing - Designing & managing the test process (2002)


Failure mode risk numbers
If the range of scores is 1-5:
– maximum risk number = 125: implies a very serious risk, but one that could be detected with testing
– minimum risk number = 1: implies a low risk that would not be detected with testing
Risk numbers indicate your testing priorities.
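The P x C x T scoring lends itself to a few lines of code; this is a minimal sketch, and the failure modes and scores below are invented examples:

```python
# Sketch: P x C x T = risk number, each factor scored 1 (low) to 5 (high).
# The failure modes and scores are invented examples.

failure_modes = [                   # (name, P, C, T)
    ("Browser compatibility",     4, 3, 5),
    ("Premium miscalculated",     2, 5, 4),
    ("Session lost mid-transfer", 3, 4, 3),
]

# Highest risk number first = testing priority order.
scored = sorted(((name, p * c * t) for name, p, c, t in failure_modes),
                key=lambda item: item[1], reverse=True)

for name, risk in scored:
    print(f"{risk:3d}  {name}")
```

Sorting by risk number descending gives the testing priority order directly.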

Page 86: Risk-Based Testing - Designing & managing the test process (2002)


Test process worksheet

Columns: Failure Mode or Objective; Probability; Consequence; Test Effectiveness; RISK Number; test stages (Prototyping; Infrastructure; Sub-System; Application System; Non-Functional Tests; User Acceptance; Operational Acceptance (BTS); Live Confidence; Customer Live Trial); Test Technique.

Client Platform
1. Which browsers, versions and O/S platforms will be supported (includes non-frames, non-graphic browsers etc.)? – SS
2. New platforms: Web TV, Mobile Phones, Palm Pilots etc.
3. Connection through commercial services e.g. MSN, Compuserve, AOL – SS
4. Browser HTML Syntax Checking – SS
5. Browser compatibility HTML Checking – SS
6. Client configuration e.g. unusable, local character sets being rejected by database etc. – SS
7. Client configuration: Client turns off graphics, rejects cookies, Cookies time out, Client doesn’t have required plug-ins etc. – SS
8. Minimum supported client platform to be determined/validated – SS

Component Functionality
9. Client component functionality – SS
10. Client web-page object loading – SS
11. Custom-built infrastructure component functionality – SS
12. COTS component functionality – SS
13. HTML page content checking - spelling, HTML validation – SS

System/Application functionality
14. End-to-end system functionality – CC
15. Loss of context/persistence between transactions – SS CC

Page 87: Risk-Based Testing - Designing & managing the test process (2002)


Using the worksheet - risks
Failure Mode or Objective column:
– failures/risks
– requirements for demonstrations
– mandatory/regulatory/imposed requirements
Probability of the problem occurring.
Consequence of failure.
Test Effectiveness - if we test, how likely is it that the problem would be detected?

Page 88: Risk-Based Testing - Designing & managing the test process (2002)


Using the worksheet - test stages

Proposed test stages and responsibilities can take shape.
A failure mode might be addressed by one or more test stages.
Let’s fill in a few rows and calculate risk numbers (19, 28, 38, 45).

Page 89: Risk-Based Testing - Designing & managing the test process (2002)


Creating the worksheet
Create a template sheet with initial risks and objectives based on experience/checklists.
Cross-functional brainstorming:
– stakeholders or technically qualified nominees
– might take all day, but worth completing in one session to retain momentum
If you can’t get a meeting, use the specs, then get individuals to review.

Page 90: Risk-Based Testing - Designing & managing the test process (2002)


Additional columns on the worksheet

COST! How can you prioritise the risks and proposed test activities without knowing the cost?
For each test activity, assign a cost estimate, identifying all assumptions and dependencies.
When the risks and testing are prioritised and cost is the limiting factor, you know:
– what testing can be afforded – IN SCOPE
– what testing cannot be afforded – OUT OF SCOPE.
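The IN SCOPE / OUT OF SCOPE split can be sketched as a budget-constrained selection; the greedy-by-risk-number policy, the items, costs and budget here are all illustrative assumptions, not from the slides:

```python
# Sketch: scoping by cost. Take failure modes in descending risk-number
# order until the budget runs out; record the rest as out of scope.
# The greedy policy, items, costs and budget are illustrative assumptions.

budget = 50  # person-days available for testing

items = [  # (failure mode, risk number, estimated cost in days)
    ("End-to-end functionality", 100, 30),
    ("Browser compatibility",     60, 15),
    ("HTML content checking",     25, 10),
    ("New platforms",             12,  8),
]

in_scope, out_of_scope, spent = [], [], 0
for name, risk, cost in sorted(items, key=lambda i: i[1], reverse=True):
    if spent + cost <= budget:
        in_scope.append(name)
        spent += cost
    else:
        out_of_scope.append(name)  # evidence the risk was considered

print("IN SCOPE    :", in_scope)
print("OUT OF SCOPE:", out_of_scope)
```

Recording the out-of-scope items, rather than silently dropping them, is what later provides the evidence that a risk was considered but not tested.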

Page 91: Risk-Based Testing - Designing & managing the test process (2002)


From worksheet to test process

Identify/analyse the risk.
Select a test activity to address the risks.
Collect test activities into stages and sequence them.
Define, for each stage:
– objectives, object under test
– entry, exit and acceptance criteria
– responsibility, deliverables
– environment, tools, techniques, methods.

Page 92: Risk-Based Testing - Designing & managing the test process (2002)


Test stage - key attributes
Test Objectives • The objectives of this stage of testing, e.g. faults to be found; risks to be avoided; demonstrations to be performed.
Component(s) under Test • The architectural components, documents, business processes to be subjected to the test.
Baseline • Document(s) defining the requirements to be met for the components under test (to predict expected results).
Responsibility • Groups responsible for e.g. preparing tests, executing tests and performing analysis of test results.
Environment • Environment in which the test(s) will be performed.
Entry Criteria • Criteria that must be met before test execution may start.
Exit Criteria • Criteria to be met for the test stage to end.
Techniques/tools • Special techniques, methods to be adopted; test harnesses, drivers or automated test tools to be used.
Deliverables • Inventory of deliverables from the test stage.

Page 93: Risk-Based Testing - Designing & managing the test process (2002)


Testing in the real world
Time and cost limit what can be done.
Some risks may be deemed acceptable without testing.
Some risks will be de-scoped to squeeze the plan into the available timescales or budget:
– mark de-scoped line items ‘out of scope’
– if someone asks later what happened, you have evidence that the risk was considered but deemed acceptable.

Page 94: Risk-Based Testing - Designing & managing the test process (2002)


Stages, Teams and Environments

Page 95: Risk-Based Testing - Designing & managing the test process (2002)


So what is this real world, for this project?

(Diagram: the Master Test Plan maps risks, failure modes and test objectives onto test execution in three dimensions.)
STAGES (LEVELS): Unit Testing; Integration Testing; System Testing; Large-Scale Integration Testing; Acceptance Testing.
TEAMS: Developers; Sys Test Team; LSI Test Team; Acceptance Test Team.
ENVIRONMENTS: Personal / by pair; Integration; Partitioned; To-be-live (or copy of live).

Page 96: Risk-Based Testing - Designing & managing the test process (2002)


Many focused stages, or bigger broader stages?

It depends: it may be convenient to...
– group test types together (each as early as possible)
– group by responsibilities (teams / managers)
– group into available environments
Need to keep dependencies in the right sequence.
The system development lifecycle has an influence:
– agile methods suggest fewer, broader stages
– proprietary methods may have their own stage names.

Page 97: Risk-Based Testing - Designing & managing the test process (2002)


Integration, and Large-Scale Integration

(Diagram:)
INTEGRATION:
– 1. Application (objects) Sub-System – Database Server
– 2. Web Sub-System – Web Server
LARGE-SCALE INTEGRATION:
– 3. Order Processing Sub-System
– 4. Full E-Business System – Banking System (Credit Card Processor), Legacy System(s)

Page 98: Risk-Based Testing - Designing & managing the test process (2002)


Organising the teams

(Diagram: programme organisation)
– Programme Management, with Testing Strategy & Co-ordination
– Project (Release) Management, spanning:
– Requirements, Design & Development: Unit, Integration & System Testing (involved in Acceptance Testing)
– Service Integration: Large-Scale Integration Testing
– Service Implementation: Acceptance Testing & Pilot

Page 99: Risk-Based Testing - Designing & managing the test process (2002)


Specifying and procuring the environments

(Diagram: releases flowing through environments over time)
– Development environment (at location A): Unit & Integration Testing for Releases 1, 2, 3
– Test environment at location A: System Testing for Releases 1, 2, 3
– Test (to-be-live) environment at location B: not yet used, then LSI Test & Acc Test for Releases 1, 2, 3 (connect test interfaces, later move test interfaces)
– Live environment (at location B): not yet live, then Release 1 Pilot & Live (connect live interfaces), then Release 2 Live, now live (etc)
– Disaster Recovery and test environment (at location C): not yet used.

Page 100: Risk-Based Testing - Designing & managing the test process (2002)


Where do the non-functional tests fit?
Some need a stable functional base, e.g. performance, security, but we can do some early work:
– configuration benchmarking, infrastructure & component performance
– security inspections, e.g. source code, network
Usability testing can & should be done early.

Page 101: Risk-Based Testing - Designing & managing the test process (2002)


System development life-cycles

Waterfall is still in use, but iterative is more suited to risk-based testing:
– iterative lifecycles are themselves risk-reducers
– iterations give a good opportunity to re-evaluate risks
– incremental is a special case of iterative; if functionality is added to a stable core, retest risk is low
But: if the design is iterated, retest risk is high, and much regression testing is needed.

Page 102: Risk-Based Testing - Designing & managing the test process (2002)


Planning for entry to, exit from, and completion of, test stages

(Diagram: Release 1 flows up the stages UNIT → INTEGRATION → SYSTEM → LARGE SCALE INTEGRATION → ACCEPTANCE → PILOT, with Release 2 starting behind it. Deliveries: every unit as soon as unit-tested; sub-systems or periodic builds, plus “fix” builds; deliveries of major functional areas. Each stage feeds back retesting & regression testing of fixes.)

Page 103: Risk-Based Testing - Designing & managing the test process (2002)


Estimation

Page 104: Risk-Based Testing - Designing & managing the test process (2002)


Estimation in a vacuum
Need to estimate early. “It’s impossible!”
– no requirements yet
– don’t know the risks
– don’t know how many faults there will be
– don’t know how severe the faults will be
– don’t know how long the developers will take to fix them
– etc. etc.
But you need some high level estimates to include in the plan.

Page 105: Risk-Based Testing - Designing & managing the test process (2002)


Problems with high-level estimates
If unsubstantiated by lower level detail:
– management add a dose of optimism
– your estimates may be pruned
– contingency is removed if it sticks out too much
Estimates not properly justified will be challenged, or simply thrown out.
Management target costs and timescales:
– based on pride
– because of real or imagined pressure from above
– they lose much of their reputation if dates are not met
The project may not be approved if estimates are high.

Page 106: Risk-Based Testing - Designing & managing the test process (2002)


Estimation formula A

Assumption: testing takes as long as development.
1. FUNCTESTBUDGET = cost of all technical design and documentation plus development (excluding any testing at all)
2. DEVTESTBUDGET = developers’ estimate of their unit and integration testing

Page 107: Risk-Based Testing - Designing & managing the test process (2002)


Estimation formula A (cont’d)

3. INDEPTESTBUDGET = FUNCTESTBUDGET - DEVTESTBUDGET
Document an important assumption: that your team will have sign-off authority on the developers’ unit and especially integration test plans. This may seem overbearing, but if testing is neglected at any stage, the price will be paid!
4. SYSTESTBUDGET = 0.75 x INDEPTESTBUDGET
5. USERTESTBUDGET = 0.25 x INDEPTESTBUDGET
6. NFTESTBUDGET = 0.25 x FUNCTESTBUDGET, if a stringent performance test is required
7. Else NFTESTBUDGET = 0.15 x FUNCTESTBUDGET, if no or only a superficial performance test is required
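Formula A reduces to simple arithmetic. A minimal sketch, with illustrative figures (the function name is ours):

```python
# Sketch of estimation formula A as stated on the slides. Figures are
# money (or effort) in any consistent unit; inputs are illustrative.

def formula_a(functestbudget, devtestbudget, stringent_perf_test):
    """functestbudget: all design/documentation/development, excluding
    testing; devtestbudget: developers' own unit + integration testing."""
    indep = functestbudget - devtestbudget
    return {
        "INDEPTESTBUDGET": indep,
        "SYSTESTBUDGET": 0.75 * indep,
        "USERTESTBUDGET": 0.25 * indep,
        # 0.25 if a stringent performance test is required, else 0.15
        "NFTESTBUDGET": (0.25 if stringent_perf_test else 0.15)
                        * functestbudget,
    }

b = formula_a(100_000, 20_000, stringent_perf_test=False)
print(b)
```

With FUNCTESTBUDGET = 100k and DEVTESTBUDGET = 20k this yields INDEPTESTBUDGET 80k, SYSTESTBUDGET 60k, USERTESTBUDGET 20k and NFTESTBUDGET 15k.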

Page 108: Risk-Based Testing - Designing & managing the test process (2002)


Estimation formula A (cont’d)

The total test budget for staff resources is therefore FUNCTESTBUDGET + 25% (or whatever uplift you choose for non-functional testing).
Add a figure for the cost of the test tool licenses you believe you will incur:
– consult one or more vendors on what to budget
– if they exaggerate, they risk losing the business!
– tools can cost up to $100,000 or even $150,000
Then add a cost for test environment(s).

Page 109: Risk-Based Testing - Designing & managing the test process (2002)


Estimation formula A (cont’d)

Ask the technical architect to estimate the cost and effort required to build three independent functional test environments:
– component testing
– integration plus system testing (overlapped for most of the functional testing period)
– a production-scale performance test environment, available for two months
Add that to your total estimate.

Page 110: Risk-Based Testing - Designing & managing the test process (2002)


Estimation formula B (1-2-3 rule)

This formula works best for SYSTEM testing.
If you have detailed requirements/designs, scan a few typical pages of the requirement:
– estimate how many test conditions per page
– multiply the average conditions per page by the number of requirements pages
– C = total test conditions
Assume a tester can specify 50 conditions/day.
Specification:
– effort to specify the SYSTEM test = NDAYS = C / 50

Page 111: Risk-Based Testing - Designing & managing the test process (2002)


Estimation formula B (cont’d)

Preparation:
– effort = 2 x NDAYS, if preparing test data and expected results is easy
– effort = 3 x NDAYS, if preparing test data and expected results is harder
– effort = 4 x NDAYS, if you need to prepare detailed scripts for audit purposes
Execution:
– effort = NDAYS (if it goes well (does it ever?))
– effort = 3 x NDAYS (if it goes badly (always?))
Regression test (execution):
– effort = NDAYS

Page 112: Risk-Based Testing - Designing & managing the test process (2002)


Estimation formula B (cont’d)

Total effort for SYSTEM TEST is therefore between:
– 6 x NDAYS and 9 x NDAYS
Allocate an additional budget of one-third of the system test budget to user acceptance, i.e.:
– between 2 x NDAYS and 3 x NDAYS
For non-functional testing and tools, use formula B, the 1-2-3 rule:
– 1 day to specify tests (the test cases)
– 2 days to prepare tests
– 1-3 days to execute tests (3 if it goes badly)
– easy to remember, but you may have different ratios.
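Formula B is equally mechanical. A sketch, with the factors and inputs as assumptions to be tuned to your own ratios:

```python
# Sketch of estimation formula B (the 1-2-3 rule). Effort is in
# person-days; the inputs and function name are illustrative.

def formula_b(pages, conditions_per_page,
              prep_factor=2,    # 2 easy data, 3 harder, 4 audit scripts
              exec_factor=3):   # 1 if execution goes well, 3 if badly
    conditions = pages * conditions_per_page
    ndays = conditions / 50     # a tester specifies ~50 conditions/day
    return {
        "specification": ndays,
        "preparation": prep_factor * ndays,
        "execution": exec_factor * ndays,
        "regression": ndays,
    }

est = formula_b(pages=100, conditions_per_page=25)
print(est, "total:", sum(est.values()))
```

For 100 pages at 25 conditions/page this gives NDAYS = 50: 50 days to specify, 100 to prepare, 150 to execute (badly) and 50 to regression test.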

Page 113: Risk-Based Testing - Designing & managing the test process (2002)


Exercise – Estimation formula A

Total budget for analysis, design and development = £100k.
Developers say component/integration test will cost £20k.
Calculate, using formula A:
– budget for system and acceptance testing
– non-functional testing
– total cost of development and testing
Document your assumptions.

Page 114: Risk-Based Testing - Designing & managing the test process (2002)


Exercise – Estimation formula A (cont’d)

1. FUNCTESTBUDGET =
2. DEVTESTBUDGET =
3. INDEPTESTBUDGET =
4. SYSTESTBUDGET =
5. USERTESTBUDGET =
6. NFTESTBUDGET =
7. Total cost of dev + test =
8. Assumptions?

Page 115: Risk-Based Testing - Designing & managing the test process (2002)


Exercise – Estimation formula B

A 150-page specification has 100 pages of requirements with an average of 25 conditions per page.
Calculate, using formula B, estimates for:
– Specification
– Preparation (assume it’s easy to do)
– Execution (assume the worst)
– Execution of a complete regression test
– User acceptance test.

Page 116: Risk-Based Testing - Designing & managing the test process (2002)


Planning and Scheduling

Page 117: Risk-Based Testing - Designing & managing the test process (2002)


Master Test Planned: now what’s next?

The Master Test Plan has estimated, at a high level, resource and time costs.
Next steps are:
– more detailed and confident test plans for each stage
– bottom-up estimates to check against top-down
– preparing to manage test specification & execution to time, cost, quality...

Page 118: Risk-Based Testing - Designing & managing the test process (2002)


Test Plans for each testing stage

eg for System Testing:

(Diagram: risks F1-F4, U1-U3, S1-S2 and requirements F1-F3, N1-N2 – carrying risk numbers such as 125, 100, 60, 40, 32, 30, 25, 12, 10 – map onto test objectives F1-F4, U1-U3, S1-S2 plus generic test objectives G4-G5; these in turn map onto tests 1, 2, 3, ... 11, each assigned a test importance of H, M or L.)

Page 119: Risk-Based Testing - Designing & managing the test process (2002)


Plan to manage risk during test execution

(Diagram: Risk sits among the four project variables – Scope, Time, Cost and Quality – shown in their different pairings; the diagram singles out the best pair to fine-tune.)

Page 120: Risk-Based Testing - Designing & managing the test process (2002)


Test Design: target execution schedule

(Diagram: an execution schedule grid of teams, environments and testers against test execution days 1-15..., placing numbered tests 1-11 in a partition for functional tests – balance & transaction reporting, inter-account transfers, payments, direct debits, end-to-end customer scenarios – and a partition for disruptive non-functional tests, with time reserved for retests & regression tests, and both an earliest and a comfortable completion date marked.)

Page 121: Risk-Based Testing - Designing & managing the test process (2002)


Test Specification and Risk Traceability

Page 122: Risk-Based Testing - Designing & managing the test process (2002)


What test specification documents after Design?

(Diagram: the Master Test Plan leads to Stage Test Plans, then Designs, Cases, “Scripts” (test procedure specifications), a Schedule, and Data – contrived, converted or “live” – plus execution & management procedures.)

Page 123: Risk-Based Testing - Designing & managing the test process (2002)


Context-driven approach affects test documentation

(Diagram contrasting the two approaches. “BEST PRACTICE”: a specification of the product yields an expected outcome; the test yields a test outcome; comparing them gives pass/fail against the requirement. CONTEXT-DRIVEN: the requirement, context, epistemology and cognitive psychology inform heuristics; the tester observes & evaluates the product, uses abductive inference to form an impression of the product, and practises bug advocacy.)

Page 124: Risk-Based Testing - Designing & managing the test process (2002)


Variables in test documentation

Format of the documents: automated tool, spreadsheet, word processor, or even hand-written cards!

Test procedures: detailed content of tests if needed, and separate overall process

Expected outcomes: explicit or implicit?

Test data: synthetic and lifelike.

Page 125: Risk-Based Testing - Designing & managing the test process (2002)


How much test documentation is required?

Stake out two extremes, e.g.:
– “best practice” documentation, reviewed
– spreadsheet overviews, no review
Identify the requirements each “extreme pre-requisite” is really trying to address, e.g.:
– anyone can execute; efficiency is paramount
Distil the common objective, question assumptions, inject refinements; the best fit might be a mixture of documentation styles tailored to your project.
– Sources: Software documentation superstitions (Daich); Goldratt’s Theory of Constraints (Dettmer).

Page 126: Risk-Based Testing - Designing & managing the test process (2002)


Traceability of risks
Many projects use risk analysis already, but...
Maintaining traceability of risks is a challenge.
For risk-based reporting, you need clerical help or automation (more work is needed on this).
Beware over-complex risk relationships.
The tactical solution is usually spreadsheets.
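A minimal sketch of such a tactical spreadsheet, kept here as a CSV; the column names and rows are invented for illustration:

```python
# Sketch: the "tactical spreadsheet" for risk traceability, as a CSV
# that maps risks to the tests addressing them. Columns and rows are
# invented for illustration.
import csv
import io

sheet = """risk_id,risk,test_ids,status
F1,Premium miscalculated,T1;T2;T3,open
U1,Unusable on minimum client,T7,closed
"""

rows = list(csv.DictReader(io.StringIO(sheet)))
open_risks = [r["risk"] for r in rows if r["status"] == "open"]
print("risks still open:", open_risks)
```

Keeping the mapping flat like this sidesteps the over-complex risk relationships the slide warns about.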

Page 127: Risk-Based Testing - Designing & managing the test process (2002)


Part IV
Managing Test Execution and the End-Game

Page 128: Risk-Based Testing - Designing & managing the test process (2002)


Part IV - Agenda
– Progress and Incident Management
– Risk and Benefits-Based Test Reporting
– Consensus and Risk-Based Acceptance.

Page 129: Risk-Based Testing - Designing & managing the test process (2002)


Progress and Incident Management

Page 130: Risk-Based Testing - Designing & managing the test process (2002)


Risk reduction components
Tests passed reduce perceived risk.
The components of risk reduction are:
– progress through tests
– severities of incidents reported
– progress through fault-fixing and retesting
– first quantitative (have we done enough testing?), then
– qualitative (can we tolerate the remaining specific risks?).

Page 131: Risk-Based Testing - Designing & managing the test process (2002)


Incident classification and entry-exit criteria
Classifying incidents:
– priority: how much testing it interrupts (“urgency”)
– severity: business impact if not fixed (“importance”)
– three levels of each may be enough (ten is too many)
Entry and exit criteria:
– entry = preparations + adequate exit from previous stage(s)
– exit: the build, delivery or release can proceed to the next stage.
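The three-level classification can be sketched as follows; the `Incident` shape, the data and the particular exit criterion are assumptions for illustration, not from the slides:

```python
# Sketch: three-level priority/severity incident classification, and an
# exit criterion built on it. The Incident shape and data are assumed.
from dataclasses import dataclass

@dataclass
class Incident:
    id: int
    priority: str  # urgency: how much testing it interrupts
    severity: str  # importance: business impact if not fixed

incidents = [Incident(1, "high", "medium"),
             Incident(2, "low", "high")]

# One possible exit criterion: no high-severity incident still open.
exit_ok = not any(i.severity == "high" for i in incidents)
print("exit criteria met:", exit_ok)
```

Keeping priority (urgency) separate from severity (importance) lets a low-priority incident still block release when its business impact is high, as here.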

Page 132: Risk-Based Testing - Designing & managing the test process (2002)


Progress through tests
We are interested in two main aspects:
– can we manage the test execution to get complete before the target date?
– if not, can we do it for those tests of high (and medium?) importance?
(Charts: number of tests run against date – target tests run vs actual tests run – and number of tests passed against date – target tests passed vs actual tests passed – broken down by high/medium/low importance and pass/fail.)

Page 133: Risk-Based Testing - Designing & managing the test process (2002)


Progress through incident fixing and retesting
Similarly, two main aspects:
– can we manage the workflow to get incidents fixed and retested before the target date?
– if not, can we do it for those of material impact?
(Charts: cumulative incidents against date by status – awaiting fix, resolved, closed, deferred – and outstanding incidents of material impact.)

Page 134: Risk-Based Testing - Designing & managing the test process (2002)


Quantitative and qualitative risk reduction from tests and retests

(Diagram: progress and residual risk up the right side of the W-model – System Testing, Large-Scale Integration Testing, Acceptance Testing – combining progress through tests (high/medium/low importance, pass/fail) with progress through incident fixing & retesting (awaiting fix, resolved, deferred, closed, material impact).)

Page 135: Risk-Based Testing - Designing & managing the test process (2002)


Risk and Benefits-Based Test Reporting

Page 136: Risk-Based Testing - Designing & managing the test process (2002)


Risk-based reporting

(Chart: Residual Risks against progress through the test plan, from the start – where all risks are ‘open’ – to the planned end; the curve’s height today shows the residual risks of releasing TODAY.)

Page 137: Risk-Based Testing - Designing & managing the test process (2002)


Benefits of risk-based test reporting

The risk of release is known:
– on the day you start and throughout the test phase
– on the day before testing is squeezed
Progress through the test plan brings positive results – risks are checked off, benefits become available.
Pressure: to eliminate risks, and for testers to provide evidence that risks are gone.
We assume the system does not work until we have evidence – “guilty until proven innocent”.
Reporting is in the language that management and stakeholders understand.


Benefit & objectives based test reporting

[Diagram: each Benefit depends on one or more Objectives, and each Objective carries Risks marked Open or Closed; benefits whose objectives' risks are all Closed are available for release]
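This benefit/objective/risk structure is easy to encode. A minimal Python sketch (the benefit, objective and risk names are invented for illustration): a benefit becomes available for release once no open risk blocks any of its objectives.

```python
# Illustrative traceability: benefits depend on objectives,
# and objectives are blocked while any associated risk stays open.
benefit_objectives = {"Faster checkout": ["O1"], "Self-service reports": ["O2", "O3"]}
objective_risks = {"O1": ["R1"], "O2": ["R2"], "O3": ["R3"]}
open_risks = {"R3"}  # risks testing has not yet closed

def available_benefits(benefit_objectives, objective_risks, open_risks):
    """Benefits whose objectives have no remaining open risks."""
    def objective_clear(obj):
        return not any(r in open_risks for r in objective_risks[obj])
    return [b for b, objs in benefit_objectives.items()
            if all(objective_clear(o) for o in objs)]

print(available_benefits(benefit_objectives, objective_risks, open_risks))
# ['Faster checkout']
```

Reporting this list, rather than raw test counts, states progress in terms stakeholders care about: which benefits could be released today.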


Benefits of benefit-based test reporting

Risk(s) that block every benefit are known:

– on the day you start, and throughout the test phase

– before testing is squeezed

Progress through the test plan brings positive results – benefits are delivered

Pressure: to eliminate risks, and for testers to provide evidence that benefits are delivered

We assume that the system has no benefits to deliver until we have evidence

Reporting is in the language that management and stakeholders understand.


How good is our testing?

Our testing is good if it provides:

– evidence of the benefits delivered

– evidence of the CURRENT risk of release

– at an acceptable cost

– in an acceptable timeframe

Good testing is:

– knowing the status of benefits with confidence

– knowing the risk of release with confidence.


Consensus and Risk-Based Acceptance


Stakeholders and acceptance hierarchy

Testers build consensus among the differing viewpoints of stakeholders, but senior management will typically accept the system, eg:

[Diagram: acceptance hierarchy: Senior management (customer-facing? internal-facing?) above Project & Quality management (hopes, fears), above User Acceptance and Operational Acceptance, linked by tester-facilitated consensus]


Test Review Boards and degrees of approval

To ascend the approval and acceptance hierarchy, testers facilitate Testing Review Boards, eg:

[Diagram: review boards spanning Large-scale Integration, User Acceptance, Operational Acceptance and Pilot start]

Decisions could be:

– unqualified

– qualified

– delay


Slippages and trade-offs: an example

If Test Review Boards recommend delay, management may demand a trade-off: “slip in a little of that descoped functionality”

This adds benefits but also new risks:

[Chart: scope against date, with scope increasing from the original target, through a first slip, to the actual go-live]


Tolerable risk-benefit balance: another example

Even if we resist the temptation to trade off slippage against scope, we may still need to renegotiate the tolerable level of risk balanced against benefits:

[Chart: net risk (risk minus benefits) against date, showing the original target net risk at the original target date, a “go for it” margin, and the actual go-live]


Closing Comments


Risk-based test approach: planning

The RBT approach helps stakeholders:

– they get more involved and buy in

– they have better visibility of the test process

The RBT approach helps testers:

– clear guidance on the focus of testing

– approval to test against risks in scope

– approval to not test against risks out of scope

– clearer test objectives upon which to design tests.


Risk-based test approach: execution and reporting

The RBT approach helps stakeholders:

– they have better visibility of the benefits available and of the risks that block those benefits

The RBT approach helps management:

– to see progress in terms of risks addressed and benefits available for delivery

– to manage the risks that block acceptance

– to better make the release decision.


Risk-Based Testing: Close

Any Questions?

Document templates can be found at

www.riskbasedtesting.com