OOSE Testing (TRANSCRIPT)
Object-Oriented Software Engineering
Chapter 6: Object-Oriented Testing
Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
An Overview of Testing
• Testing is the process of finding differences between the expected behavior specified by system models and the observed behavior of the system.
• The goal of testing is to design tests that detect defects in the system and reveal problems.
• To test a system effectively, a tester must have a detailed understanding of the whole system, ranging from the requirements to system design decisions and implementation issues.
• A tester must also be knowledgeable about testing techniques and apply these techniques effectively and efficiently to meet time, budget, and quality constraints.
An Overview of Testing
• Testing is
– not "a process of demonstrating that errors are not present"
– a systematic attempt to find errors in a planned way
• Software testing answers the question: "Does the software behave as specified?"
• Testing is a process used to identify the correctness, completeness, and quality of developed computer software.
• One of the practical methods commonly used to detect the presence of errors (failures) in a computer program is to test it for a set of inputs.
[Figure: a program under test is fed inputs I1, I2, I3, …, In, and the obtained results are compared with the expected results to decide whether the output is correct.]
Testing…?
Who Tests the Software…?
• Developer: understands the system, tests "gently"; the test is driven by "delivery".
• Independent tester: learns about the system, attempts to break the system; the test is driven by "quality".
What Testing Shows…?
• Errors
• Requirements conformance
• Performance
• An indication of quality
7
When to Start Testing…?
• An early start to testing reduces the cost and time of rework and helps deliver error-free software to the client.
• In the Software Development Life Cycle (SDLC), testing can be started from the requirements-gathering phase and lasts until the deployment of the software.
• However, it also depends on the development model that is being used.
• For example, in the Waterfall model formal testing is conducted in the testing phase,
• but in the incremental model, testing is performed at the end of every increment/iteration, and at the end the whole application is tested.
8
When to Stop Testing…?
• Unlike when to start testing, it is difficult to determine when to stop testing, as testing is a never-ending process and no one can say that any software is 100% tested.
• The following aspects should be considered when deciding to stop testing:
– Testing deadlines
– Management decision
– Completion of test case execution
– Completion of functional and code coverage to a certain point
– Bug rate falls below a certain level
An Overview of Testing
Unit testing finds differences between the object design model and its corresponding component.
Structural testing finds differences between the system design model and a subset of integrated subsystems.
Functional testing finds differences between the use case model and the system.
Finally, performance testing finds differences between nonfunctional requirements and actual system performance.
Software Reliability
• Software reliability is the probability that a software system will not cause the failure of the system for a specified time under specified conditions [IEEE Std. 982-1989].
• The goal of testing is to maximize the number of discovered faults and increase the reliability of the system.
• The three classes of techniques for avoiding, detecting, and tolerating faults are:
1. Fault Avoidance Techniques
2. Fault Detection Techniques
3. Fault Tolerance Techniques
Quality Control Techniques
1. Fault Avoidance Techniques
Fault avoidance tries to prevent the occurrence of errors and failures by finding faults in the system before it is released.
• Fault avoidance techniques include:
– development methodologies
– configuration management
– verification techniques
– reviews of the system models, in particular the code model
Fault Avoidance Techniques
Development methodologies avoid faults by providing techniques that minimize fault introduction in the system models and code. Such techniques include:
– the unambiguous representation of requirements,
– the use of data abstraction and data encapsulation,
– minimizing the coupling between subsystems,
– maximizing subsystem coherence,
– the early definition of subsystem interfaces, and
– the capture of rationale information for maintenance activities.
Fault Avoidance Techniques
Configuration management avoids faults caused by undisciplined change in the system models.
• It is a common mistake to change a subsystem interface without notifying the developers of all calling components.
• Configuration management can make sure that, if analysis models and code become inconsistent with one another, analysts and implementers are notified.
Verification attempts to find faults before any execution of the system.
• It is possible for a small operating system kernel, but it has limits.
• It is difficult to verify the quality of large, complex systems.
Fault Avoidance Techniques
• A review is the manual inspection of parts or all aspects of the system without actually executing the system.
• There are two types of reviews:
– walkthrough
– inspection
• In a walkthrough, the developer informally presents the API, the code, and associated documentation of the component to the review team.
• The review team makes comments on the mapping of the analysis and object design to the code using use cases and scenarios from the analysis phase.
Fault Avoidance Techniques
• An inspection is similar to a walkthrough, but the presentation of the unit is formal.
• In fact, in a code inspection, the developer is not allowed to present the artifacts (models, code, and documentation).
• This is done by the review team, which is responsible for checking the interface and code of the component against the requirements.
• It also checks the algorithms for efficiency with respect to the nonfunctional requirements.
• Finally, it checks comments about the code and compares them with the code itself to find inaccurate and incomplete comments.
Fault Avoidance Techniques
Fault Detection Techniques
• Fault detection techniques, such as debugging and testing, are used during the development process to identify errors and find the underlying faults before releasing the system.
• Debugging assumes that
– faults can be found by starting from an unplanned failure;
– the developer moves the system through a succession of states, ultimately arriving at and identifying the erroneous state.
• There are two types of debugging:
– The goal of correctness debugging is to find any deviation between the observed and specified functional requirements.
– Performance debugging addresses the deviation between observed and specified nonfunctional requirements, such as response time.
Fault Detection Techniques
• Testing is
– a fault detection technique that tries to create failures or errors in a planned way;
– it allows the developer to detect failures in the system before it is released to the customer.
• Unit testing tries to find faults in participating objects and/or subsystems with respect to the use cases from the use case model.
• Integration testing is the activity of finding faults when testing the individually tested components together, for example, subsystems described in the subsystem decomposition, while executing the use cases and scenarios from the RAD.
Testing
Fault Detection Techniques
• System testing tests all the components together, seen as a single system, to identify errors with respect to the scenarios from the problem statement and the requirements and design goals identified in the analysis and system design, respectively:
• Functional testing tests the requirements from the RAD and, if available, from the user manual.
• Performance testing checks the nonfunctional requirements and additional design goals from the SDD. Note that functional and performance testing are both done by developers.
• Acceptance testing and installation testing check the requirements against the project agreement and should be done by the client, if necessary with support from the developers.
21
Levels of Testing
Information needed at different Levels of Testing
22
23
Logical View
24
Unit Testing
• A unit is the smallest testable part of software.
• In procedural programming a unit may be an individual program, function, procedure, etc. In OOP, the smallest unit is a method.
• Unit testing is often neglected but it is, in fact, the most important level of testing.
25
Unit Testing
• Method: unit testing is performed using the white-box testing method.
• When is it performed? It is the first level of testing and is performed prior to integration testing.
• Who performs it? It is performed by software developers themselves or their peers.
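As a concrete sketch, a unit test exercises a single method in isolation. The Account class and its deposit method below are hypothetical, invented for illustration; they are not part of the slides.

```python
# A minimal sketch of unit testing the smallest OOP unit: a method.
# Account and deposit() are hypothetical examples, not from the chapter.

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        """Add a positive amount to the balance and return the new balance."""
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount
        return self.balance

# Unit tests: each one exercises one behavior of the method.
def test_deposit_increases_balance():
    assert Account(100).deposit(50) == 150

def test_deposit_rejects_non_positive():
    try:
        Account().deposit(0)
        assert False, "expected ValueError"
    except ValueError:
        pass  # expected: invalid input is rejected

test_deposit_increases_balance()
test_deposit_rejects_non_positive()
print("unit tests passed")
```

In practice such tests would be written and run by the developer (or a peer), typically with a framework such as unittest or pytest.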
26
Integration Testing
Integration testing is a level of the software testing process where individual units are combined and tested as a group. It tests the interfaces between components and the interaction of different parts of the system.
27
Integration Testing Approaches
• Big Bang is an approach to integration testing where all or most of the units are combined together and tested in one go. This approach is taken when the testing team receives the entire software in a bundle.
• Top Down is an approach to integration testing where top-level units are tested first and lower-level units are tested step by step after that. This approach is taken when a top-down development approach is followed.
28
Integration Testing Approaches
• Bottom Up is an approach to integration testing where bottom-level units are tested first and upper-level units step by step after that. This approach is taken when a bottom-up development approach is followed.
• Sandwich/Hybrid is an approach to integration testing which is a combination of the Top Down and Bottom Up approaches.
System Testing
29
30
Functional Testing:
– Goal: test the functionality of the system.
– Test cases are designed from the requirements analysis document (better: the user manual) and centered around requirements and key functions (use cases). The system is treated as a black box.
– Unit test cases can be reused, but new test cases have to be developed as well.
Performance Testing:
– Goal: try to violate non-functional requirements.
– Test how the system behaves when overloaded.
– Try unusual orders of execution.
– Check the system's response to large volumes of data.
– What is the amount of time spent in different use cases?
System Testing
Types of Performance Testing
• Stress testing: stress the limits of the system.
• Volume testing: test what happens if large amounts of data are handled.
• Configuration testing: test the various software and hardware configurations.
• Compatibility testing: test backward compatibility with existing systems.
• Timing testing: evaluate response times and the time to perform a function.
• Security testing: try to violate security requirements.
• Environmental testing: test tolerances for heat, humidity, and motion.
• Quality testing: test reliability, maintainability, and availability.
• Recovery testing: test the system's response to the presence of errors or loss of data.
• Human factors testing: test with end users.
Acceptance Testing
• Goal: demonstrate that the system is ready for operational use.
– The choice of tests is made by the client.
– Many tests can be taken from integration testing.
– The acceptance test is performed by the client, not by the developer.
• Alpha test: the client uses the software in the developer's environment. The software is used in a controlled setting, with the developer always ready to fix bugs.
• Beta test: conducted in the client's environment (the developer is not present). The software gets a realistic workout in the target environment.
Fault tolerance techniques
• Fault tolerance techniques assume that a system can be released with errors and that system failures can be dealt with by recovering from them at run time.
• Fault tolerance is the recovery from failure while the system is executing.
• A component is a part of the system that can be isolated for testing. A component can be an object, a group of objects, or one or more subsystems.
• A fault, also called a bug or defect, is a design or coding mistake that may cause abnormal component behavior.
Testing Concepts
• An error is a manifestation of a fault during the execution of the system.
• A failure is a deviation between the specification of a component and its behavior. A failure is triggered by one or more errors.
• A test case is a set of inputs and expected results that exercises a component with the purpose of causing failures and detecting faults.
• A test stub is a partial implementation of components on which the tested component depends.
• A test driver is a partial implementation of a component that depends on the tested component. Test stubs and drivers enable components to be isolated from the rest of the system for testing.
• A correction is a change to a component. The purpose of a correction is to repair a fault.
Model elements used during Testing
36
Faults, Failures & Errors
Errors
• An error is a mistake, misconception, or misunderstanding on the part of a software developer.
Faults / Defects / Bugs
• A fault (defect) is introduced into the software as the result of an error.
• It is an anomaly in the software that may cause it to behave incorrectly, not according to its specification.
Failures
• A failure is the inability of a software system or component to perform its required functions within specified performance requirements.
• A fault in the code does not always produce a failure.
37
LOC Code
1 program double ();
2 var x,y: integer;
3 begin
4 read(x);
5 y := x * x;
6 write(y)
7 end
Fault: The fault that causes the failure is in line 5: the * operator is used instead of +.
Error: The error that leads to this fault may be:
• a typing error (the developer has written * instead of +), or
• a conceptual error (e.g., the developer doesn't know how to double a number).
Failure: for x = 3 the program outputs y = 9. This is a failure of the system, since the correct output would be 6.
Fault, Failure, Error: An Illustration
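The same illustration can be run directly; this Python sketch mirrors the listing above and also shows why a fault does not always produce a failure.

```python
# The fault: '*' was typed where '+' was intended (line 5 of the listing).
def double_faulty(x):
    return x * x  # fault: multiplication instead of addition

def double_correct(x):
    return x + x  # the specified behavior: double the input

# For x = 3 the fault manifests as a failure: 9 is observed, 6 was specified.
print(double_faulty(3), double_correct(3))  # 9 6

# For x = 2 the very same fault produces NO failure: 2*2 == 2+2 == 4.
# This is why "a fault in the code does not always produce a failure."
print(double_faulty(2), double_correct(2))  # 4 4
```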
[Figure: fault → error → failure]
An error may also be due to a mechanical cause, an earthquake, etc.
Example of Fault
Testing Concepts
Test Cases
• A test case is a set of input data and expected results that exercises a component with the purpose of causing failures and detecting faults.
• A test case has five attributes:
1. name
2. location
3. input
4. oracle, and
5. log (Table 9-1)
• The name of the test case allows the tester to distinguish between different test cases.
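The five attributes can be sketched as a simple record. The field values below are made up purely for illustration; only the attribute names come from the text.

```python
from dataclasses import dataclass

# Sketch: the five attributes of a test case as a record.
@dataclass
class TestCase:
    name: str       # distinguishes this test case from others
    location: str   # where the test case resides and how it is executed
    input: object   # the input data or commands
    oracle: object  # the expected result, compared with the actual output
    log: str = ""   # the recorded output of an actual execution

tc = TestCase(name="TestA1",
              location="tests/unit_double",  # hypothetical path
              input=3,
              oracle=6)
print(tc.name, tc.input, tc.oracle)
```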
Testing Concepts
Testing Concepts
Test Cases: Consider a test model where TestA must precede TestB and TestC. TestA consists of TestA1 and TestA2, meaning that once TestA1 and TestA2 are tested, TestA is tested; there is no separate test for TestA. A good test model has as few associations as possible, because tests that are not associated with each other can be executed independently of each other. This allows a tester to speed up testing.
Testing Concepts
Test Cases: Depending on which aspect of the system model is tested, test cases are classified into black-box tests and white-box tests.
• Black-box tests focus on the input/output behavior of the component. They do not deal with the internal aspects of the component, nor with the behavior or the structure of the component.
• White-box tests focus on the internal structure of the component. A white-box test makes sure that, independently of the particular input/output behavior, the internal states and interactions of the component are exercised.
Testing Concepts
Correction is a change to a component whose purpose is to repair a fault. It can range from a simple modification of a single component to a complete redesign of a data structure or a subsystem.
Several techniques can be used to handle such faults:
a. Problem tracking includes the documentation of each failure, error, and fault detected, its correction, and the revisions of the components involved in the change.
b. Regression testing includes the re-execution of all prior tests after a change. Regression testing is important in object-oriented methods, but it is costly.
Testing Concepts
c. Rationale maintenance includes the documentation of the rationale of the change and its relationship with the rationale of the revised component. Rationale maintenance enables developers to avoid introducing new faults by inspecting the assumptions that were used to build the component.
Documenting Testing
Testing activities are documented in four types of documents:
1. the Test Plan,
2. the Test Case Specifications,
3. the Test Incident Reports, and
4. the Test Summary Report.
Test Plan: The Test Plan focuses on the managerial aspects of testing. It documents the scope, approach, resources, and schedule of testing activities. The requirements and the components to be tested are identified in this document.
The Test Plan (TP) and the Test Case Specifications (TCS) are written early in the process, as soon as the test planning and each test case are completed. These documents are under configuration management and updated as the system models change.
Documenting Testing
Student Reading Assignment&
Reference
48
1. "If an input condition for the software-under-test is specified as a range of values, select one valid equivalence class that covers the allowed range and two invalid equivalence classes, one outside each end of the range."
For example, suppose the specification for a module says that an input, the length of a wire in millimetres, lies in the range 1–499; then select:
– One valid equivalence class that includes all values from 1 to 499.
– Two invalid classes: one that consists of all values less than 1, and one that consists of all values greater than 499.
Equivalence Partitioning
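Rule 1 can be sketched for the wire-length example. The validate_length function is a hypothetical unit under test; the representative values picked from each class are one possible choice.

```python
# Hypothetical function under test: accepts wire lengths in 1..499 mm.
def validate_length(mm):
    return 1 <= mm <= 499

# One representative value per equivalence class (rule 1):
valid_representative = 250   # from the valid class 1..499
invalid_below = 0            # from the invalid class: values < 1
invalid_above = 500          # from the invalid class: values > 499

assert validate_length(valid_representative)
assert not validate_length(invalid_below)
assert not validate_length(invalid_above)
print("EP rule 1 checks passed")
```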
49
2. "If an input condition for the software-under-test is specified as a number of values, then select one valid equivalence class that includes the allowed number of values and two invalid equivalence classes that are outside each end of the allowed number."
For example, if the specification for a real-estate-related module says that a house can have one to four owners, then we select:
– One valid equivalence class that includes all the valid numbers of owners.
– Two invalid equivalence classes: less than one owner, and more than four owners.
Equivalence Partitioning
50
3. "If an input condition for the software-under-test is specified as a set of valid input values, then select one valid equivalence class that contains all the members of the set and one invalid equivalence class for any value outside the set."
For example, if the specification for a paint module states that the colours RED, BLUE, GREEN and YELLOW are allowed as inputs, then select:
– One valid equivalence class that includes the set RED, BLUE, GREEN and YELLOW, and
– One invalid equivalence class for all other inputs.
Equivalence Partitioning
51
4. "If an input condition for the software-under-test is specified as a 'must be' condition, select one valid equivalence class to represent the 'must be' condition and one invalid class that does not include the 'must be' condition."
For example, if the specification for a module states that the first character of a part identifier must be a letter, then select:
– One valid equivalence class where the first character is a letter, and
– One invalid class where the first character is not a letter.
Equivalence Partitioning
52
Write test cases using equivalence partitioning for a requirement that is stated as follows:
"In the examination grading system, if the student scores 0 to less than 40 then assign E Grade, if the student scores between 40 to 49 then assign D Grade, if the student scores between 50 to 69 then assign C Grade, if the student scores between 70 to 84 then assign B Grade, and if the student scores 85 to 100 then assign A Grade."
In the above problem definition, after analysis, we identify the set of output values and the corresponding sets of input values producing the same output. This analysis results in:
Values from 0 to 39 produce E Grade
Values from 40 to 49 produce D Grade
Values from 50 to 69 produce C Grade
Values from 70 to 84 produce B Grade
Values from 85 to 100 produce A Grade
Equivalence Partitioning - An Illustration
53
Based on EP, we identify the following input values to test each partition.
For the EP in range 0 to 39 (producing E Grade): minimum value = 0, maximum value = 39, precision = 1. Thus, the input values for testing this EP for Grade E are:
minimum value = 0
(minimum value + precision) = 1
(maximum value - precision) = 38
maximum value = 39
Thus, the input values for the boundary values 0 and 39 are: 0, 1, 38, 39.
Output value: Grade E.
Equivalence Partitioning - An Illustration
54
Along similar lines of analysis, we arrive at the following input values for the other EPs and the corresponding outputs:
For 40 to 49, we have 40, 41, 48, 49 and the output value is "Grade D"
For 50 to 69, we have 50, 51, 68, 69 and the output value is "Grade C"
For 70 to 84, we have 70, 71, 83, 84 and the output value is "Grade B"
For 85 to 100, we have 85, 86, 99, 100 and the output value is "Grade A"
In addition, we have an EP with input values -1 and 101, for which the corresponding output value is "Error".
Equivalence Partitioning - An Illustration
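This EP analysis can be checked mechanically. The grade function below is a hypothetical implementation of the stated requirement, used only to run the EP-derived inputs against the expected outputs.

```python
# Hypothetical implementation of the grading requirement.
def grade(score):
    if not 0 <= score <= 100:
        return "Error"
    if score < 40:
        return "E"
    if score <= 49:
        return "D"
    if score <= 69:
        return "C"
    if score <= 84:
        return "B"
    return "A"

# EP-derived inputs and expected outputs from the analysis:
expected = {
    "E": (0, 1, 38, 39),
    "D": (40, 41, 48, 49),
    "C": (50, 51, 68, 69),
    "B": (70, 71, 83, 84),
    "A": (85, 86, 99, 100),
}
for g, scores in expected.items():
    for s in scores:
        assert grade(s) == g, (s, g)

# The out-of-range EP:
assert grade(-1) == "Error" and grade(101) == "Error"
print("all EP test inputs produce the expected grades")
```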
55
For these boundaries, test cases based on the EP technique are documented in the table "Test Case Design for the given example using the EP Technique".
Equivalence Partitioning - An Illustration
56
• Boundary Value Analysis (BVA) is a test case design technique that identifies an appropriate number of input values to be supplied to test the system.
• During software development, people often confuse the < and <= (and > and >=) relational operators.
• As a result, a large number of errors tends to occur at the boundaries of the input domain.
• BVA leads to the selection of test cases that exercise boundary values.
• BVA-based test case design helps to write test cases that exercise bounding values.
Boundary Value Analysis
57
“Bugs lurk in corners andcongregate at boundaries…”
---Boris Beizer
The test cases developed based on equivalence class partitioning can be strengthened by use of a technique called boundary value analysis.
Many defects occur directly on, and just above and below, the edges of equivalence classes.
BVA is a test case design technique for finding off-by-one errors.
Boundary Value Analysis
58
1. If an input condition for the software-under-test is specified as a range of values, develop valid test cases for the ends of the range, and invalid test cases for possibilities just above and below the ends of the range.
For example, if a specification states that an input value for a module must lie in the range between -1.0 and +1.0, then:
– valid tests that include the values -1.0 and +1.0 for the ends of the range, as well as
– invalid test cases for values just above and below the ends, -1.1 and +1.1, should be included.
Boundary Value Analysis
59
2. If an input condition for the software-under-test is specified as a number of values, develop valid test cases for the minimum and maximum numbers, as well as invalid test cases that include one less than the minimum and one greater than the maximum.
For example, for the real-estate module mentioned previously, which specified that a house can have one to four owners:
– valid tests that include the minimum and maximum values, 1 and 4 owners, and
– invalid tests that include one less than the minimum and one greater than the maximum, 0 and 5 owners, are developed.
Boundary Value Analysis
60
3. If the input or output of the software-under-test is an ordered set, such as a table or a linear list, develop tests that focus on the first and last elements of the set.
Example: a loan application.
Boundary Value Analysis
61
Using BVA: Range
When the programming element is a range type, we can arrive at test cases using BVA as follows. For a range of values bounded by a and b, test:
(minimum value - precision)
minimum value
(minimum value + precision)
(maximum value - precision)
maximum value
(maximum value + precision)
Boundary Value Analysis
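The six-value recipe above is easy to automate. This is a small sketch; bva_range is a hypothetical helper name, and the precision parameter matches the integer examples that follow.

```python
# Sketch: generate the six BVA test inputs for a range bounded by a and b.
def bva_range(a, b, precision=1):
    return [a - precision,  # just below the minimum (invalid side)
            a,              # the minimum itself
            a + precision,  # just above the minimum
            b - precision,  # just below the maximum
            b,              # the maximum itself
            b + precision]  # just above the maximum (invalid side)

print(bva_range(0, 39))  # [-1, 0, 1, 38, 39, 40]
```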
62
Write test cases using BVA for a requirement that is stated as follows:
"In the examination grading system, if the student scores 0 to less than 40 then assign E Grade, if the student scores between 40 to 49 then assign D Grade, if the student scores between 50 to 69 then assign C Grade, if the student scores between 70 to 84 then assign B Grade, and if the student scores 85 to 100 then assign A Grade."
In the above problem definition, after analysis, we identify the following boundary value ranges:
0 to 39
40 to 49
50 to 69
70 to 84
85 to 100
BVA - An Illustration
63
Based on BVA, we identify the following input values to test each boundary.
For the boundary values in range 0 to 39: minimum value = 0, maximum value = 39, precision = 1. Thus, the input values for testing these boundary values are:
(minimum value - precision) = -1, minimum value = 0, (minimum value + precision) = 1, (maximum value - precision) = 38, maximum value = 39, (maximum value + precision) = 40
Thus, the input values for the boundary values 0 and 39 are: -1, 0, 1, 38, 39, 40.
Along similar lines of analysis, we arrive at the following input values for the other boundary values:
For 40 to 49, we have 39, 40, 41, 48, 49, 50
For 50 to 69, we have 49, 50, 51, 68, 69, 70
For 70 to 84, we have 69, 70, 71, 83, 84, 85
For 85 to 100, we have 84, 85, 86, 99, 100, 101
BVA - An Illustration
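The boundary-value inputs for every grade range can be derived directly from the definition rather than listed by hand; this sketch (hypothetical helper name bva_range) reproduces them.

```python
# Sketch: derive the BVA inputs for each grade range from the definition.
def bva_range(a, b, precision=1):
    return [a - precision, a, a + precision, b - precision, b, b + precision]

grade_ranges = {"E": (0, 39), "D": (40, 49), "C": (50, 69),
                "B": (70, 84), "A": (85, 100)}

for grade, (lo, hi) in grade_ranges.items():
    print(grade, bva_range(lo, hi))
# E [-1, 0, 1, 38, 39, 40]
# D [39, 40, 41, 48, 49, 50]
# C [49, 50, 51, 68, 69, 70]
# B [69, 70, 71, 83, 84, 85]
# A [84, 85, 86, 99, 100, 101]
```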
64
BVA – Test Case Design