ICS 52: Introduction to Software Engineering


Topic 11 Summer 2003 1

ICS 52: Introduction to Software Engineering

Lecture Notes for Summer Quarter, 2003

Michele Rousseau, Topic 11

Partially based on lecture notes written by Sommerville, Frost, Van Der Hoek, Taylor & Tonne. Duplication of course material for any commercial purpose without the written permission of the lecturers is prohibited.

Topic 11 Summer 2003 2

Today’s Lecture

• Quality assurance
• An introduction to testing

Topic 11 Summer 2003 3

ICS 52 Life Cycle

Requirements phase → Verify

Design phase → Verify

Implementation phase → Test

Testing phase → Verify

Topic 11 Summer 2003 4

Implementation/Testing Interaction

Implementation (previous lecture)

Testing (this lecture)

Topic 11 Summer 2003 5

The Seriousness of the Problem…

• Mars Climate Orbiter – metric vs. English units
• Audi 5000 – auto accelerate – feature or fault?
• Mariner 1 launch – veered off course
• AT&T telephone network – down for 9 hours
• Ariane 5
• Pentium – FPU error
• X-ray machine (Therac-25) – over-radiation
• LAS – London Ambulance Service dispatch system

Topic 11 Summer 2003 6

Impact of Failures

Not just “out there”
• Mars Pathfinder
• Mariner 1
• Ariane 5

But also “at home”
• Your car
• Your call to your mom
• Your homework
• Your hospital visit

Peter Neumann’s Risks Forum: http://catless.ncl.ac.uk/Risks

Topic 11 Summer 2003 7

Quality Assurance

What qualities do we want to assure?
Correctness (most important?)
How to assure correctness?
• By running tests
• How else?

Can qualities other than correctness be “assured”?
How is testing done? When is testing done?
Who tests? What are the problems?

Topic 11 Summer 2003 8

Software Qualities

• Correctness
• Reliability
• Robustness
• Performance
• Usability
• Verifiability
• Maintainability
• Repairability
• Safety
• Evolvability
• Reusability
• Portability
• Survivability
• Understandability

We want to show relevant qualities exist

Topic 11 Summer 2003 9

Quality Assurance

Assure that each of the software qualities is met
• Goals set in requirements specification
• Goals realized in implementation

Sometimes easy, sometimes difficult
• Portability versus safety

Sometimes immediate, sometimes delayed
• Understandability versus evolvability

Sometimes provable, sometimes doubtful
• Size versus correctness

Topic 11 Summer 2003 10

Verification and Validation

Verification: “Are we building the product right?” (Boehm)
• The software should conform to its specification
• Testing, reviews, walk-throughs, inspections
• Internal consistency; consistency with the previous step

Validation: “Are we building the right product?”
• The software should do what the user really requires
• Ascertaining that the software meets the customer’s intent

Correctness has no meaning independent of specifications

Topic 11 Summer 2003 11

Problem #1: Eliciting the Customer’s Intent

[Diagram: the actual specs, the “correct” specs, and the customer’s real needs overlap only partially]

No matter how sophisticated the QA process is, there is still the problem of creating the initial specification

Topic 11 Summer 2003 12

Problem #2: QA is tough

Complex data communications
• Electronic funds transfer

Distributed processing
• Web search engine

Stringent performance objectives
• Air traffic control system

Complex processing
• Medical diagnosis system

Sometimes the software system is extremely complicated, making it tremendously difficult to perform QA

Topic 11 Summer 2003 13

Problem #3: Management Aspects of QA

Who does what part of the testing?
• QA (Quality Assurance) team?
• Are developers involved?
• How independent is the independent testing group?

What happens when bugs are found?
What is the reward structure?

[Diagram: Project Management above the Development Group and the QA Group, with question marks on the relationships between them]

Topic 11 Summer 2003 14

Problem #4: QA vs Developers

Quality assurance lays out the rules
• You will check in your code every day
• You will comment your code
• You will…

Quality assurance also uncovers the faults
• Taps developers on their fingers
• Creates an image of “competition”

Quality assurance is viewed as cumbersome
• “Just let me code”

What about rewards?

Quality assurance has a negative connotation

Topic 11 Summer 2003 15

Problem #5: Can’t test exhaustively

There are 10^14 possible paths! If we execute one test per millisecond, it would take roughly 3,170 years to test this program!! Out of the question.

[Flow graph: program whose loop executes up to 20 times]
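The slide’s arithmetic can be checked directly; a quick sketch using the slide’s own numbers (10^14 paths, one test per millisecond):

```python
# Time needed to exhaustively test 10**14 paths at one test per millisecond.
paths = 10**14
tests_per_second = 1000                  # one test per millisecond
seconds = paths / tests_per_second       # 10**11 seconds
years = seconds / (365 * 24 * 3600)
print(round(years))                      # → 3171, i.e. roughly 3,170 years
```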

Topic 11 Summer 2003 16

Simple Example: A 32-Bit Multiplier

Input: two 32-bit integers
Output: the 64-bit product of the inputs
Testing hardware: checks one billion products per second (roughly one check per 2^-30 seconds)

How long to check all possible products?
2^64 × 2^-30 = 2^34 seconds ≈ 512 years

What if the implementation is based on table lookups?

How would you know that the spec is correct?
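The figures above can be verified with a short sketch (note the slide’s “512 years” treats a year as roughly 2^25 seconds; an exact calendar year gives about 545):

```python
# Exhaustive check of a 32-bit multiplier: 2**64 input pairs,
# at 2**30 (about one billion) checks per second.
pairs = 2**64
checks_per_second = 2**30
seconds = pairs / checks_per_second          # exactly 2**34 seconds
years = seconds / (365 * 24 * 3600)
print(seconds == 2**34, round(years))        # → True 545
```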

Topic 11 Summer 2003 17

An Idealized View of QA

Complete formal specs of the problem to be solved
↓ correctness-preserving transformation
Design, in formal notation
↓ correctness-preserving transformation
Code, in a verifiable language
↓ correctness-preserving transformation
Executable machine code
↓ correctness-preserving transformation
Execution on verified hardware

Topic 11 Summer 2003 18

A Realistic View of QA

Mixture of formal and informal specifications
↓ manual transformation
Design, in mixed notation
↓ manual transformation
Code, in C++, Ada, Java, …
↓ compilation by commercial compiler
Pentium machine code
↓ commercial firmware
Execution on commercial hardware

Topic 11 Summer 2003 19

The V & V process

Is a whole life-cycle process – V & V must be applied at each stage in the software process.

Has two principal objectives:
• The discovery of defects in a system
• The assessment of whether or not the system is usable in an operational situation

Topic 11 Summer 2003 20

Static and dynamic verification

Software inspections: concerned with analysis of the static system representation to discover problems (static verification)
• May be supplemented by tool-based document and code analysis

Software testing: concerned with exercising and observing product behaviour (dynamic verification)
• The system is executed with test data and its operational behaviour is observed

Topic 11 Summer 2003 21

Static and dynamic V&V

[Diagram: static verification applies to the requirements specification, high-level design, formal specification, detailed design, and program; dynamic validation is applied via a prototype]

Topic 11 Summer 2003 22

V & V confidence

Depends on the system’s purpose, user expectations, and marketing environment
• Software function
  » The level of confidence depends on how critical the software is to an organisation
• User expectations
  » Users may have low expectations of certain kinds of software
• Marketing environment
  » Getting a product to market early may be more important than finding defects in the program

Topic 11 Summer 2003 23

V & V planning

Careful planning is essential
Start early – remember the V model
• Perpetual testing

Balance static verification and testing

Define standards for the testing process rather than describing product tests

Topic 11 Summer 2003 24

Static Analysis

Software inspection: examine the source representation with the aim of discovering anomalies and defects

May be used before implementation

May be applied to any representation of the system (requirements, design, test data, etc.)

A very effective technique for discovering errors

Topic 11 Summer 2003 25

Inspection success

Many different defects may be discovered in a single inspection.

In testing, one defect may mask another, so several executions are required

Inspections reuse domain and programming knowledge, so reviewers are likely to have seen the types of error that commonly arise

Topic 11 Summer 2003 26

Inspections and testing

Inspections and testing are complementary, not opposing, verification techniques

Both should be used during the V & V process

Inspections can check conformance with a specification
• Cannot check conformance with the customer’s real requirements
• Cannot validate dynamic behaviour

Inspections cannot check non-functional characteristics such as performance, usability, etc.


Topic 11 Summer 2003 28

Testing

The only validation technique for non-functional requirements

Should be used in conjunction with static verification to provide full V&V coverage

“Program testing can be used to show the presence of bugs, but never to show their absence.”

— E. W. Dijkstra

Topic 11 Summer 2003 29

What is Testing

Exercising a module, collection of modules, or system
• Use predetermined inputs (“test cases”)
• Capture actual outputs
• Compare actual outputs to expected outputs

Actual outputs equal to expected outputs → test case succeeds

Actual outputs unequal to expected outputs → test case fails
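The succeed/fail comparison above can be sketched as a tiny test driver (the `add` function and the test cases are purely illustrative):

```python
def add(a, b):            # unit under test (illustrative)
    return a + b

# A test case: predetermined inputs plus the expected output.
test_cases = [
    ((2, 3), 5),
    ((-1, 1), 0),
    ((0, 0), 0),
]

for inputs, expected in test_cases:
    actual = add(*inputs)                          # capture actual output
    status = "succeeds" if actual == expected else "fails"
    print(inputs, "->", actual, status)            # compare actual vs. expected
```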

Topic 11 Summer 2003 30

Limits of software testing

“Good” testing will find bugs

“Good” testing is based on requirements, i.e. testing tries to find differences between the expected and the observed behavior of systems or their components

V&V should establish confidence that the software is fit for purpose

BUT remember: testing can only prove the presence of bugs, never their absence – it can’t prove the software is defect free

Rather, the software must be good enough for its intended use, and the type of use will determine the degree of confidence that is needed

Topic 11 Summer 2003 31

Testing Terminology

Failure: incorrect or unexpected output, based on specifications
• Symptom of a fault

Fault: invalid execution state
• Symptom of an error
• May or may not produce a failure

Error: defect or anomaly or “bug” in source code
• May or may not produce a fault
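The error/fault/failure distinction can be made concrete with a sketch (hypothetical code: the off-by-one in the loop bound is the *error*; skipping the last element at run time is the *fault*; a wrong return value is the *failure*):

```python
def max_of(xs):
    """Return the largest element. Deliberately buggy for illustration."""
    best = xs[0]
    for i in range(len(xs) - 1):   # ERROR: off-by-one, should be range(len(xs))
        if xs[i] > best:
            best = xs[i]
    return best

print(max_of([3, 1, 2]))   # → 3: the error produces no visible failure here
print(max_of([1, 2, 9]))   # → 2, not 9: the fault produces a failure
```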

Topic 11 Summer 2003 32

Testing and debugging

Defect testing and debugging are different processes

V & V establishes the existence of defects in a program

Debugging locates and repairs them

Topic 11 Summer 2003 33

The debugging process

[Diagram: test results, the specification, and test cases feed a cycle: locate error → design error repair → repair error → re-test program]

Topic 11 Summer 2003 34

Testing Goals

Reveal failures/faults/errors
Locate failures/faults/errors
Show system correctness
Improve confidence that the system performs as specified (verification)
Improve confidence that the system performs as desired (validation)

Desired qualities:
• Accurate
• Complete / thorough
• Repeatable
• Systematic

Topic 11 Summer 2003 35

Test Tasks

Devise test cases
• Target specific areas of the system
• Create specific inputs
• Create expected outputs

Choose test cases
• Not all need to be run all the time
  » Regression testing

Run test cases
• Can be labor intensive

All in a systematic, repeatable, and accurate manner

Topic 11 Summer 2003 36

Levels of Testing

Unit/component testing: testing of a code unit (subprogram, class, method/function, small subsystem)
• Often requires use of test drivers

Integration testing: testing of interfaces between units
• Incremental or “big bang” approach?
• Often requires drivers and stubs

System or acceptance testing: testing the complete system for satisfaction of requirements
• Often performed by the user / customer
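A minimal sketch of unit testing with a driver and a stub (all names hypothetical; the stub stands in for a dependency, such as a database-backed lookup, that is unavailable or too costly to call during the test):

```python
def apply_discount(price, rate_lookup):
    """Unit under test: price after applying the customer's discount rate."""
    return price * (1 - rate_lookup())

# Stub: replaces the real dependency with a fixed, predictable answer.
def stub_rate():
    return 0.10

# Driver: calls the unit with predetermined inputs and checks the outputs.
def test_apply_discount():
    assert abs(apply_discount(100.0, stub_rate) - 90.0) < 1e-9
    assert apply_discount(80.0, lambda: 0.0) == 80.0

test_apply_discount()
print("unit test passed")
```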

Topic 11 Summer 2003 37

What is the problem we need to address?

Want to verify software → need to test → need to decide on test cases → but no set of test cases guarantees the absence of bugs.

So, what is a systematic approach to the selection of test cases that will lead to the accurate, acceptably thorough, and repeatable identification of errors, faults, and failures?

Topic 11 Summer 2003 38

Two Approaches

White box (or glass box) testing
• Structural testing
• Test cases designed, selected, and run based on the structure of the code
• Scale: tests the nitty-gritty
• Drawback: needs access to source code

Black box testing
• Specification-based testing
• Test cases designed, selected, and run based on specifications
• Scale: tests the overall system behavior
• Drawback: less systematic

Topic 11 Summer 2003 39

Test Oracles

Provide a mechanism for deciding whether a test case execution succeeds or fails

Critical to testing
• Used in white box testing
• Used in black box testing

Difficult to automate
• Typically relies on humans and human intuition
• Formal specifications may help

Topic 11 Summer 2003 40

Example

Your test shows cos(0.5) = 0.8775825619

You have to decide whether this answer is correct

You need an oracle
• Draw a triangle and measure the sides
• Look up the cosine of 0.5 in a book
• Compute the value using a Taylor series expansion
• Check the answer with your desk calculator
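The Taylor-series oracle can be sketched directly; here `math.cos` stands in for the implementation under test, and a hand-rolled series plays the independent oracle (the term count of 10 is an arbitrary choice, ample for x = 0.5):

```python
import math

def cos_taylor(x, terms=10):
    """Oracle: cos(x) from its Taylor series, sum of (-1)^n x^(2n) / (2n)!."""
    return sum((-1)**n * x**(2 * n) / math.factorial(2 * n)
               for n in range(terms))

actual = math.cos(0.5)       # "implementation under test"
expected = cos_taylor(0.5)   # independent oracle
print(abs(actual - expected) < 1e-9)   # → True: the test case succeeds
```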

Topic 11 Summer 2003 41

Use the Principles

Rigor and formality
Separation of concerns
• Modularity
• Abstraction
Anticipation of change
Generality
Incrementality