TRANSCRIPT
COMP 354 Software Engineering I
Section BB Summer 2009
Dr Greg Butler
http://users.encs.concordia.ca/~gregb/home/comp354-summer2009.html
Validation and Verification - What must be verified?
Error, Fault, Failure
Correct, Robust, Reliable
Testing - Aim of testing
Testing - test plan, test suite, test case, test data
Test infrastructure
Reviews, walkthroughs, inspections
Validation and Verification
Validation: Are we building the right product?
Validation is a process by which one establishes that a deliverable satisfies the needs of the users.
Verification: Are we building the product right?
Verification is a process by which one establishes that a deliverable satisfies another deliverable.
What must be verified?
Every quality of the product (and deliverables)
correctness
performance
reliability
robustness
portability
maintainability
user friendliness
...
Results may not be Yes/No : eg percentage of tests passed
Results may be subjective : eg how portable is a system
Error, fault, failure
An error is a mistake made by a person when reasoning.
A fault is a mistake in the code (or deliverable) due to an error.
A failure is how a fault in the code manifests itself when the system is executed.
Testing shows failures.
Debugging tries to locate the fault (and fix it).
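A tiny hypothetical sketch (not from the slides) of the three terms in code: the programmer's error is reasoning that the last index of a list equals its length, the resulting fault is the off-by-one index written into the code, and the failure is what testing observes when the code runs.

def last_element(items):
    # Fault: off-by-one index, caused by the error of reasoning that
    # the last index equals len(items).
    return items[len(items)]      # should be items[len(items) - 1]

# Failure: the fault only manifests itself when the code is executed.
try:
    last_element([10, 20, 30])
except IndexError as exc:
    print("failure observed:", exc)   # testing shows the failure; debugging would locate the fault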
Correct, robust, reliable
Correctness: Does the system always give the output described in the requirements when given an input described in the requirements?
Every right input produces the expected output
Robustness: Does the system behave reasonably when given an incorrect (unexpected) input?
Reliability: How often does the system produce a correct output when given a correct input?
Probability of correct behaviour
Requires knowledge of the distribution of typical usage of the system
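As a hedged illustration, assuming a square-root routine (the function names here are invented for the example): a correct version gives the expected output for every input described in the requirements, while a robust version also behaves reasonably on unexpected input.

import math

def sqrt_correct(x):
    # Correctness: every right (non-negative) input produces the expected output.
    return math.sqrt(x)

def sqrt_robust(x):
    # Robustness: also behaves reasonably on incorrect (unexpected) input,
    # rejecting it with a clear message instead of failing arbitrarily.
    if not isinstance(x, (int, float)):
        raise TypeError("sqrt expects a number")
    if x < 0:
        raise ValueError("sqrt expects a non-negative number")
    return math.sqrt(x)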
Testing
Testing is verifying the quality of correctness
Aim of testing
To discover the presence of errors!!!
Testing cannot prove correctness
Except for very simple systems where you can test every possible input
Test data, test case, test suite
test data: a set of inputs devised to exercise a test
can be automatically generated sometimes (see structural testing)
test case consists of
the purpose of the test
in terms of the system requirements it exercises
an input specification (ie test data)
a specification of the expected output
test suite is a collection of test cases
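One possible way to write these down as data (a minimal sketch with hypothetical names, using the SQRT example that appears under unit testing below):

import math

test_case = {
    "purpose": "exercises the requirement that SQRT returns the square root of its input",
    "test_data": 16.0,              # the input specification
    "expected_output": 4.0,         # the specification of the expected output
}

test_suite = [test_case]            # a test suite is a collection of test cases

def run_suite(suite, function_under_test):
    for case in suite:
        actual = function_under_test(case["test_data"])
        assert actual == case["expected_output"], case["purpose"]

run_suite(test_suite, math.sqrt)    # math.sqrt stands in for the SQRT unit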
Test Plan
Major components of a complete test plan are:
• description of the major phases of testing (eg unit testing, module testing, ... )
• objectives (acceptance criteria) for the testing process
• overall testing schedule and resource allocation (when, who, time and machine resources) The test schedule should allow for slippage.
• description of the relationship between the test plan and other project plans (eg implementation schedule)
• description of how traceability of test results to system requirements is to be demonstrated
• description of how tests results are recorded (It must be possible to audit the testing process to guarantee that tests have been carried out on latest versions of the software)
• description of how the test cases were designed, and how the test data was generated
• description of all the test cases, including all test data
Stages of Testing
Unit Testing: testing an individual unit or basic component of the system
eg testing a function SQRT
Module Testing: testing a module consisting of several units, to check that their interaction and interfaces are ok
eg testing that pop and push in a stack provide a last-in-first-out behaviour
Subsystem Testing: testing a subsystem consisting of several modules, to check module interaction and module interfaces are ok
eg an output subsystem writing to a database, and keeping an audit trail, and providing rollback services
Integration Testing: testing the entire system once all the subsystems have been integrated
Acceptance Testing: testing with real data for the customer to satisfy the customer that the system meets the requirements
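A hedged sketch of the first two stages using Python's unittest; SQRT is played by math.sqrt and the Stack class is a stand-in module invented for the example:

import unittest
from math import isclose, sqrt as SQRT     # math.sqrt stands in for the SQRT unit

class Stack:
    # Stand-in module: push and pop should provide last-in-first-out behaviour.
    def __init__(self):
        self._items = []
    def push(self, value):
        self._items.append(value)
    def pop(self):
        return self._items.pop()

class StageExamples(unittest.TestCase):
    def test_sqrt_unit(self):
        # Unit testing: an individual unit such as SQRT
        self.assertTrue(isclose(SQRT(16.0), 4.0))

    def test_stack_module_lifo(self):
        # Module testing: pop and push interact to give last-in-first-out behaviour
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)
        self.assertEqual(s.pop(), 1)

if __name__ == "__main__":
    unittest.main()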
Stages of Testing
Remember the V-Model
Testing Whole System
Stress (Overload) Testing: check the capability of the system to perform under 'overloaded' conditions
eg full tables, memory, filesystem
eg more simultaneous users than expected
Regression Testing: testing old features again when new features are added or changes are made, to make sure you fully understood the changes to be made and that they did not impact parts of the system supposedly isolated from the changes
regression = degradation in level of correctness (of old features)
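A minimal sketch of the idea (hypothetical feature and test): the old test stays in the suite and is re-run after every change, so a regression in the old feature shows up as a newly failing old test.

def format_price(value, currency="CAD"):
    # Old feature: prices are formatted with two decimal places.
    return f"{value:.2f} {currency}"

def test_format_price_still_two_decimals():
    # Written when the feature was first added; kept in the suite ever since.
    assert format_price(3.5) == "3.50 CAD"

test_format_price_still_two_decimals()    # re-running the old test after a change is regression testing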
Code Walkthrough
Code walkthrough is an informal analysis based on "playing the computer" and "walking through" the code
The states of the "computation" are recorded on a piece of paper or a blackboard.
Code walkthrough is a cooperative, organized activity with several participants
Aims of code walkthrough:
• discovery of errors - not fixing errors
• stylistic review of code
• compliance to coding standards
• educating junior programmers
Code Inspection
Code inspection is like a walkthrough, but differs in its goals.
Aims of code inspection:
• discovery of commonly made errors
Given a list of commonly made errors, look at code to see if any of them occur
Code inspection is not a hand execution of the code.
Process for Code Inspection
Preparation requires
• a precise specification of the code to be inspected must be available
• members of the inspection team must be familiar with all relevant standards of the organization
• code: up-to-date and free of syntax errors
• checklist of likely errors
Plan/schedule : people on team, code to be inspected
Overview of code is presented
Each individual team member reads code, notes errors
People
• Small number of peers, not management-types
Meeting
• short --- no more than two hours
• well-defined, single focus on a section or small number of related sections of code (no more than 180-250 lines of code in total)
• aim: discovery of errors --- do not stray from this
Sample Checklist of Common Errors
use of uninitialized variables
have all constants been named
for each conditional statement,
• is the condition correct
• jumps into loops
• incompatible types in an assignment
• nonterminating loops
for each array,
• should the lower bound be 0, 1, or something else
• should the upper bound be Size or Size-1
• array indexes out of bounds
are delimiters '\0' explicitly assigned for character strings
improper storage allocation/deallocation
actual-formal parameter mismatches in procedure calls
• correct number of parameters
• matching types of each formal-actual parameter pair
comparison for equality of floating point values
are compound statements correctly bracketed
if links/pointers are used, are link assignments correct when updating data structures using links
have all possible error conditions been taken into account
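For illustration only, a small hypothetical fragment deliberately seeded with a few of the checklist errors an inspector would be looking for:

def average_above_threshold(values):
    # unnamed constant: 0.1 below should be a named constant (eg THRESHOLD)
    if values:
        total = 0.0
        for i in range(len(values) + 1):   # upper bound: should be len(values); index goes out of bounds
            total += values[i]
    # uninitialized variable: 'total' is never assigned when 'values' is empty
    average = total / len(values)          # also a possible division by zero
    return average == 0.1                  # comparison for equality of floating point values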
Structural Testing (White-Box)
Structural testing is based on the structure of the control flow graph of the code.
Test cases are chosen to cover all possibilities of ...
statement coverage: select a set T of test cases such that, by executing the program P for each test case, each elementary statement of P is executed at least once.
edge coverage: select a set T of test cases such that, by executing P for each test case, each edge of the control flow graph of P is traversed at least once.
condition coverage: select a set T of test cases such that, by executing P for each test case, each edge of the control flow graph of P is traversed and all possible values of the constituents of compound conditions are exercised at least once.
path coverage: select a set T of test cases such that, by executing P for each test case, each path from the initial to the final node of the control flow graph of P is traversed.
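A small hypothetical example of how the criteria differ on one function; the test sets below are assumptions chosen for illustration, not prescribed by the slides:

def classify(x, y):
    if x > 0 and y > 0:        # compound condition with two constituents
        return "both positive"
    return "not both positive"

# statement coverage: both return statements are executed at least once
T_statement = [(1, 1), (-1, 1)]

# edge coverage: both edges out of the decision are traversed (same two cases here)
T_edge = [(1, 1), (-1, 1)]

# condition coverage: each constituent ('x > 0', 'y > 0') takes both truth values;
# (1, -1) is needed because 'and' short-circuits on (-1, 1)
T_condition = [(1, 1), (-1, 1), (1, -1)]

# path coverage: with no loops there are only two paths, so T_edge already covers them
for x, y in T_condition:
    print((x, y), "->", classify(x, y))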
Functional Testing (Black-Box)
Functional Testing is based on our view of the program as a function from inputs to outputs as described in the requirements document.
No need to look at code which implements the program.
Test cases are designed by equivalence partitioning of the input to identify typical and boundary cases.
Equivalence Partitioning for Test Cases
Equivalence partitioning: partition the domain of inputs into disjoint sets such that inputs in the same set exhibit similar/identical/equivalent properties with respect to the test being performed.
Example: a function which takes a 5-digit number as input
set 1 = { integers < 10000 }
set 2 = { integers in range 10000 .. 99999 }
set 3 = { integers > 99999 }
Example: a function which takes a linked list as input
set 1 = { empty list }
set 2 = { lists of length 1 }
first node = last node problems
set 3 = { lists of moderate length }
set 4 = { very long lists }
might be space problems
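A hedged sketch of turning the first example's partition into concrete test data; the boundary values 9999, 10000, 99999 and 100000 are typical choices added for illustration:

# Equivalence classes for the 5-digit number example, with typical and boundary values
partitions = {
    "set 1: integers < 10000":      [0, 9999],              # 9999 is the boundary just below the range
    "set 2: integers 10000..99999": [10000, 54321, 99999],  # boundaries plus a typical value
    "set 3: integers > 99999":      [100000, 1234567],      # 100000 is the boundary just above the range
}

# Inputs in the same class are expected to behave equivalently, so a few
# representatives per class (including the boundaries) are enough.
for class_name, representatives in partitions.items():
    for value in representatives:
        print(class_name, "->", value)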