SOFTWARE TESTING – UNIT 3
Prepared by Dr. R. Kavitha Page 1
3.1 The need for levels of Testing
Execution-based software testing, especially for large systems, is usually carried out at
different levels.
At each level there are specific testing goals.
• For example, at unit test a single component is tested. A principal goal is to detect
functional and structural defects in the unit.
• At the integration level several components are tested as a group, and the tester
investigates component interactions.
• At the system level the system as a whole is tested and a principal goal is to evaluate
attributes such as usability, reliability, and performance.
• For both object-oriented and procedural software systems, the testing process begins
with the smallest units or components to identify functional and structural defects.
• After the individual components have been tested, and any necessary repairs made, they
are integrated to build subsystems and clusters.
• Testers check for defects and adherence to specifications.
• Proper interaction at the component interfaces is of special interest at the integration
level.
• System test begins when all of the components have been integrated successfully. It
usually requires the bulk of testing resources.
• Laboratory equipment, special software, or special hardware may be necessary,
especially for real-time, embedded, or distributed systems. At the system level the tester
looks for defects, but the focus is on evaluating performance, usability, reliability, and
other quality-related requirements.
• If the system is being custom made for an individual client then the next step following
system test is acceptance test.
• This is a very important testing stage for the developers. During acceptance test the
development organization must show that the software meets all of the client’s
requirements.
• Very often final payments for system development depend on the quality of the
software as observed during the acceptance test.
• Software developed for the mass market often goes through a series of tests called alpha
and beta tests.
• Alpha tests bring potential users to the developer’s site to use the software.
Developers note any problems.
• Beta tests send the software out to potential users who use it under real-world
conditions and report defects to the developing organization.
• Implementing all of these levels of testing requires a large investment in time and
organizational resources.
• Organizations with poor testing processes tend to skimp on resources, ignore test
planning until code is close to completion, and omit one or more testing phases.
The approach used to design and develop a software system has an impact on how testers plan
and design suitable tests. There are two major approaches to system development—bottom-up,
and top-down.
• These approaches are supported by two major types of programming languages—
procedure-oriented and object-oriented.
• Levels of abstraction for the two types of systems are also somewhat different.
In traditional procedural systems,
• Lowest level of abstraction is described as a function or a procedure that performs some
simple task.
• The next higher level of abstraction is a group of procedures (or functions) that call one
another and implement a major system requirement. These are called subsystems.
• Combining subsystems finally produces the system as a whole, which is the highest level
of abstraction.
In object-oriented systems,
• Lowest level is viewed by some researchers as the method or member function.
• The next highest level is viewed as the class that encapsulates data and methods that
operate on the data.
• Moving up one more level in an object-oriented system requires the concept of the
cluster, which is a group of cooperating or related classes.
• Finally, there is the system level, which is a combination of all the clusters and any
auxiliary code needed to run the system.
• When object-oriented development was introduced, its beneficial features were said to be
encapsulation, inheritance, and polymorphism. These features would simplify design and
development and encourage reuse.
• However, testing of object-oriented systems is not straightforward due to these same
features.
• For example, encapsulation can hide details from testers, and that can lead to uncovered
code.
• Inheritance also presents many testing challenges, among those the retesting of inherited
methods when they are used by a subclass in a different context.
*****
3.2. Unit Test
• The unit test is the lowest level of testing performed during software development
where individual units of software are tested in isolation from other parts of a
program.
• Since the software component being tested is relatively small in size and simple in
function, it is easier to design, execute, record, and analyze test results.
• If a defect is revealed by the tests it is easier to locate and repair since only the one
unit is under consideration.
• In a conventional structured programming language, such as C, the unit to be tested is
traditionally the function or sub-routine.
• In object oriented languages such as C++, the basic unit to be tested is the class.
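As an illustrative sketch (hypothetical code, not taken from the text), a unit test exercises a single function in isolation and checks each behaviour of the unit against its specification:

```python
# Unit under test: a small, self-contained function.
def absolute_value(x):
    """Return the absolute value of x."""
    return x if x >= 0 else -x

# Unit tests: exercise the unit in isolation, one case per behaviour.
def test_absolute_value():
    assert absolute_value(5) == 5    # positive input unchanged
    assert absolute_value(-5) == 5   # negative input negated
    assert absolute_value(0) == 0    # boundary case

test_absolute_value()
print("all unit tests passed")
```

Because the unit is small and isolated, any failure here points directly at `absolute_value`, which is exactly the locate-and-repair advantage described above.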
When developing a strategy for unit testing, there are two basic organizational approaches that
can be taken: top down and bottom up.
Organizational Approaches
• Top Down Testing
• Bottom Up Testing
1. Top Down Testing
• In top down testing the unit at the top of the hierarchy is tested first. All called units are
replaced by stubs. As the testing progresses the stubs are replaced by actual units.
• Top down testing requires test stubs, but not test drivers.
• In the given example unit D is being tested; units A, B, and C have already been tested. All
the units below D have been replaced by test stubs.
• In top down testing, units are tested from top to bottom. The units above a unit are its
calling units and those below are its called units. The units below the unit being tested are
replaced by stubs. As the testing progresses, stubs are replaced by actual units.
Stubs:
Stubs are dummy modules, known as "called programs," used in top down integration testing
when subprograms are under construction.
Drivers:
Drivers are also dummy modules, known as "calling programs," used in bottom up integration
testing when main programs are under construction.
A stub is thus a dummy model of a particular module.
Real Life Example:
Suppose we have to test the integration between two modules A and B, and we have developed
only module A while module B is still in the development stage.
In such a case we cannot do an integration test for module A directly but, if we prepare a
dummy module having features similar to B's, then using that we can do the integration testing.
Our main aim here is to test module A, not module B, and this way we save time; otherwise we
would have to wait until module B is actually developed. Hence this dummy module B is
called a stub.
Now the stub cannot send/receive data to and from module A automatically, so in such a case
we have to transfer data from one module to another by some external code. This external
code is called a driver.
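The stub-and-driver arrangement described above can be sketched as follows (a hypothetical Python example; the module names and the balance-check logic are invented for illustration):

```python
# Module A (developed) calls Module B (under construction).
# A stub stands in for the *called* module B; a driver stands in for
# the *calling* code that feeds data to module A.

def module_b_stub(account_id):
    """Stub: dummy replacement for module B, returning a fixed balance."""
    return 100.0  # canned response instead of a real lookup

def module_a(account_id, get_balance=module_b_stub):
    """Module A: the code we actually want to test."""
    balance = get_balance(account_id)
    return balance >= 50.0  # e.g. 'is a withdrawal allowed?'

def driver():
    """Driver: dummy calling program that exercises module A."""
    assert module_a("ACC-1") is True
    return "module A tested against the stub"

print(driver())
```

The point of the sketch is the direction of substitution: the stub replaces a called module (top down), while the driver replaces a calling module (bottom up).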
2. Bottom up Testing
• In bottom up testing the lowest level units are tested first. They are then used to test
higher level units. The process is repeated until you reach the top of the hierarchy.
• Bottom up testing requires test drivers but does not require test stubs.
• In the example, unit D is the unit under test; all the units below it have already been tested,
and D is called by test drivers instead of by the units above it.
*****
3.3. Designing the unit test
• A test design should consist of four stages: test strategy, test planning, test
specification, and test procedure.
• These four stages apply to all levels of testing, including unit testing.
• Test strategy and test planning are mainly project management activities, while the test
procedure is the actual implementation of the test.
3.3.1.Test Planning
Goal of unit testing
• To ensure that each individual software unit is functioning according to its specification.
• Good testing practice calls for unit tests that are planned and public. Planning includes
designing tests to reveal defects such as functional description defects, algorithmic
defects, data defects, and control logic and sequence defects.
• Resources should be allocated, and test cases should be developed, using both white and
black box test design strategies. The unit should be tested by an independent tester
(someone other than the developer) and the test results and defects found should be
recorded as a part of the unit history.
• Each unit should also be reviewed by a team of reviewers, preferably before the unit test.
To prepare for unit test the developer/tester must perform several tasks. These are:
(i) Plan the general approach to unit testing;
(ii) Design the test cases, and test procedures (these will be attached to the test plan);
(iii) Define relationships between the tests;
(iv) Prepare the auxiliary code necessary for unit test.
Phase 1: Describe Unit Test Approach and Risks
• In this phase of unit testing planning the general approach to unit testing
is outlined. The test planner:
(i) identifies test risks;
(ii) describes techniques to be used for designing the test cases for the units;
(iii) describes techniques to be used for data validation and recording of test results;
(iv) describes the requirements for test harnesses and other software that interfaces with the units
to be tested, for example, any special objects needed for testing object-oriented units.
• The planner estimates resources needed for unit test, such as hardware, software, and
staff, and develops a tentative schedule under the constraints identified at that time.
Phase 2: Identify Unit Features to be Tested
• The planner determines which features of each unit will be tested, for example:
functions, performance requirements, states, and state transitions, control structures,
messages, and data flow patterns.
Phase 3: Add Levels of Detail to the Plan
• The planner adds new details to the approach, resource, and scheduling portions of
the unit test plan. As an example, existing test cases that can be reused for this project
can be identified in this phase.
• Unit availability and integration scheduling information should be included in the
revised version of the test plan. The planner must be sure to include a description of
how test results will be recorded.
• Test-related documents that will be required for this task, for example, test logs, and
test incident reports, should be described, and references to standards for these
documents provided. Any special tools required for the tests are also described.
3.3.2. Test Specification
Each unit test case should include four essential elements:
• A statement of the initial state of the unit, the starting point of the test case
• The inputs to the unit
• What the test case actually tests, in terms of the functionality of the unit and
the analysis used in the design of the test case
• The expected outcome of the test case
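A test case containing the four elements might look like this (the bounded-stack unit and its names are hypothetical, invented purely to make the elements concrete):

```python
# A unit test case written to show the four essential elements explicitly.

def push(stack, item, limit=3):
    """Hypothetical unit under test: push item unless the stack is full."""
    if len(stack) >= limit:
        raise OverflowError("stack full")
    stack.append(item)
    return stack

# 1. Initial state of the unit: a stack already holding two items.
stack = ["a", "b"]
# 2. Input to the unit: the item "c".
# 3. What the test case actually tests: pushing onto a non-full stack
#    appends the item (the normal-operation functionality of the unit).
result = push(stack, "c")
# 4. Expected outcome: the stack now holds three items, "c" on top.
assert result == ["a", "b", "c"]
print("test case passed")
```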
3.3.3. Process in Test Specification
A six-step general process can be used to develop a unit test specification as a set of individual
unit test cases.
Step 1 - Make it Run
Step 2 - Positive Testing
Step 3 - Negative Testing
Step 4 - Special Considerations
Step 5 - Coverage Tests
Step 6 - Coverage Completion
Step 1 - Make it Run
The purpose of the first test case in any unit test specification should be to execute the
unit under test in the simplest way possible.
Suitable techniques:
• Specification derived tests
• Equivalence partitioning
Step 2 - Positive Testing
Test cases should be designed to show that the unit under test does what it is supposed to do.
Suitable techniques:
o Specification derived tests
o Equivalence partitioning
o State-transition testing
Step 3 - Negative Testing
Test cases should be enhanced, and further test cases designed, to show that the
software does not do what it is not supposed to do.
Suitable techniques:
• Error guessing
• Boundary value analysis
• Internal boundary value testing
• State-transition testing
Step 4 - Special Considerations
Where appropriate, test cases should be designed to address issues related to performance,
safety and security requirements.
Suitable techniques:
• Specification derived tests
Step 5 - Coverage Tests
Add more test cases to the unit test specification to achieve specific test coverage objectives.
Suitable techniques:
• Branch testing
• Condition testing
• Data definition-use testing
• State-transition testing
Test Execution
• At this point the test specification can be used to develop an actual test procedure and
executed.
• Execution of the test procedure will identify errors in the unit which can be corrected and
the unit re-tested.
• Running of test cases will indicate whether coverage objectives have been achieved. If
not…
Step 6 - Coverage Completion
Where coverage objectives are not achieved, analysis must be conducted to determine why.
Failure to achieve a coverage objective may be due to:
o Infeasible paths or conditions
o Unreachable or redundant code
o Insufficient test cases
*****
3.4. Test Case Design Techniques
• Test case design techniques can be broadly split into two main categories.
• Black box techniques use the interface to a unit and a description of functionality, but do
not need to know how the inside of a unit is built.
• White box techniques make use of information about how the inside of a unit works.
• There are also some other techniques which do not fit into either of the above categories.
Error guessing falls into this category.
Black Box Testing
Specification Derived Tests
• Test cases are designed by walking through the relevant specifications.
• Each test case should test one or more statements of specification.
• Positive test case design technique.
Example Specification
• Input - real number
• Output - real number
• When given an input of 0 or greater, the positive square root of the input shall be
returned.
• When given an input of less than 0, the error message "Square root error - illegal
negative input" shall be displayed and a value of 0 returned.
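A sketch of a unit meeting this example specification, with one specification-derived test per statement of the specification (the function name `safe_sqrt` is an assumption, not from the text):

```python
import math

def safe_sqrt(x):
    """Hypothetical unit implementing the example specification."""
    if x >= 0:
        return math.sqrt(x)
    print("Square root error - illegal negative input")
    return 0.0

# One test case per statement of the specification:
assert safe_sqrt(4.0) == 2.0    # input >= 0: positive square root returned
assert safe_sqrt(0.0) == 0.0    # 0 belongs to the "0 or greater" statement
assert safe_sqrt(-1.0) == 0.0   # input < 0: error message and 0 returned
print("specification-derived tests passed")
```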
Equivalence Partitioning
• It is based upon splitting the inputs and outputs of the software under test into a
number of partitions
• Test cases should therefore be designed to test one value in each partition.
• Still a positive test case design technique.
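For the square-root specification above, the inputs split into two partitions: x >= 0 (valid) and x < 0 (error). A minimal sketch with one representative test per partition (the implementation is assumed, as before):

```python
import math

def safe_sqrt(x):
    """Hypothetical unit: square root per the example specification."""
    if x >= 0:
        return math.sqrt(x)
    return 0.0  # the error partition returns 0 per the specification

# Equivalence partitioning: one representative value per partition.
partitions = {
    "valid (x >= 0)": (9.0, 3.0),   # (representative value, expected result)
    "error (x < 0)":  (-7.0, 0.0),
}
for name, (value, expected) in partitions.items():
    assert safe_sqrt(value) == expected, name
print("one test per partition passed")
```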
Boundary Value Analysis
• Similar to Equivalence Partitioning
• Assumes that errors are most likely to exist at the boundaries between partitions.
• Test cases are designed to exercise the software on and at either side of boundary values.
• Incorporates a degree of negative testing into the test design
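Applied to the same square-root specification, the partition boundary lies at 0, so test cases sit on the boundary and just either side of it (a sketch; the chosen offsets of 0.01 are arbitrary):

```python
import math

def safe_sqrt(x):
    """Hypothetical unit: square root per the example specification."""
    if x >= 0:
        return math.sqrt(x)
    return 0.0

# Boundary value analysis around the partition boundary at 0:
assert safe_sqrt(0.0) == 0.0               # on the boundary (valid side)
assert safe_sqrt(0.01) == math.sqrt(0.01)  # just above the boundary
assert safe_sqrt(-0.01) == 0.0             # just below: error partition
print("boundary value tests passed")
```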
State-Transition Testing
• Used where the software has been designed as a state machine
• Test cases are designed to test transition between states by generating events
• Negative testing can be done by using illegal combinations of states and events
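A minimal sketch of state-transition testing against a hypothetical two-state turnstile (the states and events are invented for illustration). Legal transitions are tested by firing events and checking the resulting state; the illegal state/event combination is the negative test:

```python
class Turnstile:
    """Hypothetical state machine: LOCKED <-> UNLOCKED."""
    def __init__(self):
        self.state = "LOCKED"

    def event(self, e):
        if self.state == "LOCKED" and e == "coin":
            self.state = "UNLOCKED"
        elif self.state == "UNLOCKED" and e == "push":
            self.state = "LOCKED"
        else:
            raise ValueError(f"illegal event {e!r} in state {self.state}")

t = Turnstile()
t.event("coin")
assert t.state == "UNLOCKED"   # legal transition LOCKED -> UNLOCKED
t.event("push")
assert t.state == "LOCKED"     # legal transition back to LOCKED

try:                           # negative test: "push" while LOCKED
    t.event("push")
    assert False, "illegal transition was accepted"
except ValueError:
    pass
print("state-transition tests passed")
```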
White Box Testing
Branch Testing
• Designed to test the control flow branches
• E.g. if-then-else
Condition Testing
• Used to complement branch testing
• It tests logical conditions
• E.g. the conditions controlling while (a &lt; b) and for loops
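A small sketch showing branch and condition testing together (the function and its compound condition are hypothetical). Branch testing executes each outcome of the decision; condition testing additionally exercises each operand of the compound condition:

```python
def classify(a, b):
    """Hypothetical unit with one compound condition."""
    if a > 0 and b > 0:   # condition testing targets each operand
        return "both positive"
    else:
        return "not both positive"

assert classify(1, 1) == "both positive"        # true branch taken
assert classify(-1, 1) == "not both positive"   # false branch via a > 0
assert classify(1, -1) == "not both positive"   # false branch via b > 0
print("branch and condition tests passed")
```

Note that a single pair of tests would satisfy branch coverage; the third case exists only to satisfy condition coverage of the second operand.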
Error Guessing
• Based solely on the experience of the test designer
• The test cases are designed for the values which can generate errors
• If properly implemented it can be the most effective way of testing
Note about unit testing:
• Unit testing provides the earliest opportunity to catch the bugs
• And fixing them at this stage is the most economical
• Black box and white box techniques are used to develop individual test cases
• Unit testing will find bugs at a stage of the software development where they can be
corrected economically.
• Unit testing requires:
o That the design of units is documented in a specification
o That unit tests are designed from the specification before coding begins
o The expected outcomes of unit test cases are specified in the unit test specification
*****
3.5. Running the Unit tests and Recording results
Sample worksheet for recording the status of unit test
******
3.6. Integration Test
Goals
Integration test for procedural code has two major goals:
o to detect defects that occur on the interfaces of units;
o to assemble the individual units into working subsystems and finally a complete system that is ready for system test.
Integration test planning
For conventional procedural/functional-oriented systems there are four major integration strategies: top-down, bottom-up, bidirectional, and system integration.
1. Top down integration
Top down integration testing involves testing the topmost component's interfaces with the other components, in the same order as you navigate from top to bottom, until all the components are covered. For example, consider the following figure:
Example for top down integration
The integration testing starts with testing the interface between component 1 and component 2. To complete the integration testing, all the interfaces shown in the figure, covering all the arrows, have to be tested together. The order in which the interfaces are to be tested is shown in the following table.
Step Interfaces tested
1 1-2
2 1-3
3 1-4
4 1-2-5
5 1-3-6
6 1-3-6-(3-7)
7 (1-2-5)-(1-3-6-(3-7))
8 1-4-8
9 (1-2-5)-(1-3-6-(3-7))-(1-4-8)
Order of interfaces tested - TDT
To optimize the number of steps in integration testing, steps 6 and 7 can be combined and
executed as a single step. Similarly, steps 8 and 9 can also be combined and tested in a single
step. Combining steps does not mean a reduction in the number of interfaces tested; it is just an
optimization of the elapsed time, as we do not have to wait for steps 6 and 8 to get over before
starting with testing steps 7 and 9 respectively.
[Figure: component hierarchy for top down integration. Component 1 at the top; Components 2, 3, and 4 below it; Components 5, 6, 7, and 8 at the bottom.]
A component at a higher level may require a modification every time a module gets added at the
bottom. For each such component addition, integration testing needs to be repeated starting from
step 1.
Note: A breadth first approach will test the interfaces in an order such as 1-2, 1-3, 1-4 and so on,
while a depth first order will be 1-2-5, 1-3-6 and so on. In this example the breadth first
approach was used.
2. Bottom up integration testing
The navigation in bottom up integration starts from component 1, covering all sub-
systems, until component 8 is reached. The order in which the interfaces have to be tested is shown
in the table. The number of steps can be optimized into four steps, by combining steps 2 and 3 and
by combining steps 5-8. For an incremental product development, only the impacted and added
interfaces need to be tested, covering all sub-systems and system components.
Example of bottom up integration – arrows pointing up indicate the integration path
Step Interfaces tested
1 1-5
2 2-6, 3-6
3 2-6-(3-6)
4 4-7
5 1-5-8
6 2-6-(3-6)-8
7 4-7-8
8 1-5-8-(2-6-(3-6)-8)-(4-7-8)
Order of interfaces tested – BUT
[Figure: component hierarchy for bottom up integration. Components 1, 2, 3, and 4 at the bottom; Components 5, 6, and 7 in the middle; Component 8 at the top.]
******
3.7. System Testing
• The testing conducted on the complete integrated products and solutions to evaluate
system compliance with specified requirements on functional and non functional aspects
is called system testing.
• A system is defined as a set of hardware, software and other parts that together provide
product features and solutions.
• System testing is the only phase of testing which tests both the functional and non
functional aspects of the product.
Functional side- Testing focuses on real time customer usage.
Non functional side- Focuses on quality factors
Why system testing?
• An independent team normally does system testing.
• This independent team is different from the team that does the component and integration
testing. The behavior of the complete product is verified during system testing.
1. Performance / Load testing: Evaluating the time taken or response time of the system to
perform its required functions, in comparison with different versions of the same product(s) or a
different competitive product(s), is called performance testing.
2. Scalability testing: Testing that requires an enormous amount of resources to find out the
maximum capability of the system parameters is called scalability testing.
3. Reliability testing: To evaluate the ability of the system or any independent component of the
system to perform its required functions repeatedly for a specified period of time is called
reliability testing.
4. Stress testing: Evaluating a system beyond the limits of the specified requirements or system
resources (disk space, memory, processor utilization) to ensure that the system does not break
down unexpectedly is called stress testing.
5. Interoperability testing: This testing is done to ensure that two or more products can
exchange information, use that information, and work together closely.
6. Localization testing: Testing conducted to verify that the localized product works in
different languages is called localization testing.
• System testing is performed on the basis of written test cases according to information
collected from detailed architecture/design documents, module specifications and system
requirements specifications.
• System test cases can also be developed based on user stories, customer discussions, and
points made by observing typical customer usage.
• System testing may not include much negative scenario verification, such as testing for
incorrect and negative values, since such negative testing is already performed during
component and integration testing.
• System testing starts once unit, component, and integration testing are completed.
System testing is done for the following reasons
• Provide independent perspective in testing
• Bring in customer perspective in testing
• Provide a “fresh pair of eyes” to discover defects not found earlier by testing.
• Test product behavior in a holistic, complete and realistic environment.
• Test both functional and non functional aspects of the product.
• Build confidence in the product.
• Analyze and reduce the risk of releasing the product.
• Ensure all requirements are met and ready the product for acceptance testing.
• Apart from verifying the pass or fail status, non functional test results are also
determined by the amount of effort involved in executing them and any problems faced
during execution.
• Ex: If a test meets its pass criterion only after the 10th iteration, the experience is bad and
the result cannot be taken as a pass.
Types of system tests
• Functional testing
• Performance testing
• Stress testing
• Configuration testing
• Security testing
• Recovery testing
• Reliability testing
• Usability testing
Figure: Types of system testing
• Not all software systems need to undergo all the types of system testing. Test planners
need to decide on the type of tests applicable to a particular software system.
• Decisions depend on the characteristics of the system and the available test resources.
• For example, if multiple device configurations are not a requirement for your system,
then the need for configuration test is not significant.
• During system test the testers can repeat these tests and design additional tests for the
system as a whole. The repeated tests can in some cases be considered regression test.
• Properly planned and executed system tests are excellent preparation for acceptance test.
• An important tool for implementing system tests is a load generator.
• A load generator is essential for testing quality requirements such as performance and
stress.
• A load is a series of inputs that simulates a group of transactions.
• A Transaction consists of a set of operations that may be performed by a Person, software
system, or a device that is outside the system.
• A use case can be used to describe a transaction.
• Ex: System testing of a telecommunication system needs a load that simulates a series of
phone calls (transactions) of particular types and lengths arriving from different locations.
• A load can be a real load; that is, we can put the system under test to real usage by having
actual telephone users connected to it. Loads can also be produced by tools called load
generators.
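A toy load generator can be sketched as follows; the transaction fields (call type, duration, origin) follow the telecommunication example above, but the code and all names are invented for illustration:

```python
import random

def generate_load(n_calls, seed=42):
    """Yield n_calls simulated call transactions of varying type,
    length, and origin, as a load generator would."""
    rng = random.Random(seed)  # seeded so the load is reproducible
    for _ in range(n_calls):
        yield {
            "type": rng.choice(["local", "long-distance"]),
            "duration_s": rng.randint(10, 600),
            "origin": rng.choice(["site-A", "site-B", "site-C"]),
        }

def system_under_test(call):
    """Stand-in for the real system: 'process' one call transaction."""
    return call["duration_s"] > 0

results = [system_under_test(c) for c in generate_load(100)]
print(f"{sum(results)} of {len(results)} transactions processed")
```

A real load generator would additionally control the arrival rate of transactions, which is what makes performance and stress measurements meaningful.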
Functional vs non functional testing
• Functional – testing a product's functionality and features.
• Non functional – testing the product's quality factors.
Functional testing –
• Checking whether the functional requirements are met or not.
• Test results normally depend on the product, not on the environment.
• It requires in-depth customer and product knowledge.
Non functional testing:
• Checking the quality factors.
• Testing requires the expected results to be documented in qualitative and quantifiable
terms.
• It requires a large amount of resources, and results differ for different configurations
and resources.
• It is very complex since it needs a large amount of data.
3.7.1. Performance Testing
Definition:
The testing performed to evaluate the response time, throughput, and utilization of the
system to execute its required functions, in comparison with different versions of the same
product(s) or a different competitive product(s), is called "Performance Testing".
• The goal of system performance tests is to see if the software meets the performance
requirements.
• Testers also learn from performance test whether there are any hardware or software
factors that impact on the system’s performance.
• Performance testing allows testers to tune the system; that is, to optimize the allocation
of system resources.
• Performance objectives must be articulated clearly by the users/clients in the
requirements documents, and be stated clearly in the system test plan.
• The objectives must be quantified. For example, a requirement that the system return a
response to a query in "a reasonable amount of time" is not an acceptable requirement;
the time requirement must be specified in a quantitative way.
• Results of performance tests are quantifiable.
• At the end of the tests the tester will know, for example, the number of CPU cycles used,
the actual response time in seconds (minutes, etc.), the actual number of transactions
processed per time period. These can be evaluated with respect to requirements
objectives.
• Resources for performance testing must be allocated in the system test plan. These include:
• A source of transactions to drive the experiments. For example, if you were performance
testing an operating system you would need a stream of data that represents typical user
interactions. Typically the source of transactions for many systems is a load generator.
• An experimental test bed that includes hardware and software the system-under-test
interacts with. The test bed requirements sometimes include special laboratory equipment
and space that must be reserved for the tests.
• Instrumentation or probes that help to collect the performance data. Probes may be
hardware or software in nature.
• A set of tools to collect, store, process, and interpret the data. Very often, large volumes
of data are collected, and without tools the testers may have difficulty in processing and
analyzing the data in order to evaluate true performance levels.
For example, consider an application that can handle 25 simultaneous user logins at a time.
In load testing we test the application with 25 users and check how it works at this level;
in performance testing we concentrate on the time taken to perform the operations;
whereas in stress testing we test with more than 25 users, keep increasing the number, and
check at what load the application breaks down against the hardware resources.
Load and Stress Testing
• Testing the application with the maximum number of users and inputs is defined as load
testing, while testing the application with more than the maximum number of users and
inputs is defined as stress testing.
• In load testing we measure the system performance based on the volume of users, while in
stress testing we measure the breakpoint of the system.
Example:
• If an application is built for 500 users, then for load testing we check up to 500 users and
for stress testing we check beyond 500.
• A banking application can take a maximum user load of 20,000 concurrent users. Increase
the load to 21,000 and do some transactions such as deposit or withdrawal. As soon as the
transaction is done, the banking application server database will sync with the ATM
database server. Now check whether, with a user load of 21,000, this sync happened
successfully. Then repeat the same test with 22,000 concurrent users, and so on.
Factors governing performance testing
A product is expected to handle multiple transactions in a given period. The capability of the
system or the product in handling multiple transactions is determined by a factor called
throughput.
In a typical throughput graph, it can be noticed that initially the throughput keeps increasing as
the user load increases. This is the ideal situation for any product and indicates that the product is
capable of delivering more when there are more users trying to use the product. In the second part
of the graph, beyond certain user load conditions (after the bend), it can be noticed that the
throughput comes down. The optimum throughput is represented by the saturation point, which
represents the maximum throughput for the product.
Response time can be defined as the delay between the point of request and the first
response from the product.
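Response time, as defined above, can be measured by timestamping the request and the first response (a minimal sketch; the 50 ms operation is simulated, and the 2-second limit is the kind of quantified requirement discussed earlier):

```python
import time

def operation():
    """Stand-in for the business transaction being measured."""
    time.sleep(0.05)  # simulate 50 ms of work
    return "done"

start = time.perf_counter()          # point of request
result = operation()
response_time = time.perf_counter() - start  # delay to first response

assert result == "done"
# A quantified requirement: respond within 2 seconds, not merely
# "a reasonable amount of time".
assert response_time < 2.0
print(f"response time: {response_time:.3f} s")
```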
Tuning is a procedure by which the product performance is enhanced by setting different
values to the parameters (Variables) of the product, operating system and other components.
Tuning improves the product performance without having to touch the source code of the
product. Another factor that needs to be considered for performance testing is the performance of
competitive products. This type of performance testing, wherein competitive products are
compared, is called benchmarking.
Lastly, performance testing requirement needs to be associated with the actual number or
percentage of improvement that is desired. For example, if a business transaction, say ATM
money withdrawal, should be completed within two minutes, the requirement needs to document
the actual response time expected.
One of the most important factors that affects performance testing is the availability of
resources. Both hardware and software are needed to derive the best results from performance
testing and for deployments. The exercise to find out what resources and configurations are
needed is called "Capacity Planning".
Planning of Performance Testing
Collecting requirements is the first step in planning the performance testing. Typically,
functionality testing has a definite set of inputs and outputs with a clear definition of expected
results, but performance testing requires clear documentation and environmental setup, and the
expected results may not be well known in advance. So collecting requirements is a very big
challenge in performance testing.
SOFTWARE TESTING – UNIT 3
Prepared by Dr. R. Kavitha Page 19
Note: A performance test can only be carried out for a completely automated product. A feature
involving manual intervention cannot be performance tested, as the results depend on how fast a
user responds with inputs to the product.
Secondly, a performance testing requirement needs to clearly state what factors need to
be measured and improved. Performance has several factors, such as response time, latency,
throughput, and resource utilization.
Sources for deriving performance requirements
1. Performance compared to the previous release of the same product: the ATM withdrawal
transaction will be faster than the previous release by 10%.
2. Performance compared to the competitive product(s): the ATM withdrawal will be as fast as or
faster than competitive product XYZ.
3. Performance compared to absolute numbers derived from actual need: "The ATM machine should
be capable of handling 1000 transactions per day, with each transaction not taking more than a
minute."
Note: Performance numbers can also be derived from architecture and design. The architecture and design
goals are based on the performance expected for a particular load, so the source code should be
written in such a way as to meet those numbers.
Types of requirements – Performance testing
1. Generic requirements: common across all products in the product domain area. All
products in that area are expected to meet those performance expectations. Ex: time
taken to load a page, initial response when a mouse is clicked, time taken to navigate
between screens.
2. Specific requirements: depend on the implementation of a particular product and differ
from one product to another in a given domain. Ex: time taken to withdraw an amount
from an ATM.
Writing Test Cases
Step 1: List the operations or business transactions to be tested.
Step 2: List the steps for executing those operations/transactions.
Step 3: List the product and OS parameters that impact performance testing, and their values.
Step 4: Define the loading pattern.
Step 5: List the resources and their configurations (network, hardware, and software configurations).
Step 6: Document the expected results.
Step 7: Document the comparison of results with those of competitive products.
While testing the product for different load patterns, it is important to increase the load or
scalability gradually to avoid unnecessary effort in case of failures. For example, if an ATM
withdrawal fails for ten concurrent operations, there is no point in trying it for 10,000 operations.
The effort involved in testing 10 concurrent operations may be several times less than that
of testing 10,000 operations. Hence, a methodical approach is to gradually increase the
concurrent operations, say to 10, 100, 1000, 10,000, and so on, rather than attempting
10,000 concurrent operations in the first iteration itself. The test case documentation should
clearly reflect this approach.
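The stepwise approach above can be sketched as a loop that stops at the first failing load level. The 500-operation capacity below is a hypothetical illustration:

```python
def run_load_test(check_level, load_levels=(10, 100, 1000, 10000)):
    """Step the load up gradually; stop at the first level that fails."""
    results = {}
    for users in load_levels:
        results[users] = check_level(users)
        if not results[users]:
            break  # e.g. if 1000 concurrent users fail, do not attempt 10,000
    return results

# Hypothetical system that can sustain at most 500 concurrent withdrawals.
results = run_load_test(lambda users: users <= 500)
# results: {10: True, 100: True, 1000: False}; 10,000 is never attempted.
```

In a real harness, `check_level` would drive actual concurrent transactions rather than a predicate.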
3.7.2. Configuration Testing
• Configuration testing is the process of testing a system with each of the supported
software and hardware configurations.
• During this testing, the tester checks whether the software build supports different hardware
technologies or not.
Ex: printers, scanners, network topologies, etc.
• Testing the software against the hardware, or testing the
software against other software, is called configuration testing.
• Configuration testing is the process of testing a system under development on machines
which have various combinations of hardware and software.
• In many situations the number of possible configurations is far too large to test. For
example, suppose you are a member of a test team working on some desktop user
application. The number of combinations of operating system versions, memory sizes,
hard drive types, and CPUs alone could be enormous. If you target only 10 different operating
system versions, 8 different memory sizes, 6 different hard
drives, and 7 different CPUs, there are already 10 * 8 * 6 * 7 = 3,360 different hardware configurations.
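The combinatorial growth can be demonstrated directly with a Cartesian product; the configuration names below are placeholders:

```python
from itertools import product

# Placeholder names for the supported hardware variants.
os_versions  = [f"OS-{i}" for i in range(10)]     # 10 operating system versions
memory_sizes = [f"{2**i} GB" for i in range(8)]   # 8 memory sizes
hard_drives  = [f"HDD-{i}" for i in range(6)]     # 6 hard drive types
cpus         = [f"CPU-{i}" for i in range(7)]     # 7 CPUs

# Every combination is one hardware configuration to (potentially) test.
configurations = list(product(os_versions, memory_sizes, hard_drives, cpus))
```

This is why configuration testing usually relies on sampling the combinations rather than testing all of them.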
3.7.3. Security testing
• Security testing is a type of software testing that intends to uncover vulnerabilities of the
system and determine that its data and resources are protected from possible intruders.
• Websites are no longer meant only for publicity or marketing; they have evolved into
stronger tools that cater to complete business needs.
• Web based payroll systems, shopping malls, banking, stock trade application are not only
being used by organizations but are also being sold as products today.
• This means that online applications have gained the trust of customers and users
with respect to their vital attribute: security. No doubt, the security factor is of primary
value for desktop applications too.
• However, when we talk about web, importance of security increases exponentially. If an
online system cannot protect the transaction data, no one will ever think of using it.
Examples of security flaws in an application:
1) A student management system is insecure if the 'Admission' branch can edit the data of
the 'Exam' branch.
2) An online shopping mall has no security if customers' credit card details are not
encrypted.
3) Custom software possesses inadequate security if an SQL query can retrieve the actual
passwords of its users.
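The third flaw, storing retrievable passwords, is typically avoided by storing only salted hashes, so that no query can ever return a real password. A minimal sketch using Python's standard library (an illustration, not a substitute for a vetted authentication framework):

```python
import hashlib
import secrets

def hash_password(password: str) -> str:
    """Store a random salt and a PBKDF2 hash instead of the password itself."""
    salt = secrets.token_hex(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), bytes.fromhex(salt), 100_000)
    return f"{salt}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    salt, digest = stored.split("$")
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), bytes.fromhex(salt), 100_000)
    return secrets.compare_digest(candidate.hex(), digest)
```

A security test would then check that the password column in the database contains only such hashes.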
• Typical security requirements may include specific elements of confidentiality, integrity,
authentication, availability, and authorization.
• Actual security requirements tested depend on the security requirements implemented by
the system. Security testing as a term has a number of different meanings and can be
completed in a number of different ways.
Confidentiality
• A security measure which protects against the disclosure of information to parties other
than the intended recipient.
Integrity
• A measure intended to allow the receiver to determine that the information provided by a
system is correct.
Authentication
• It is any process by which a system verifies the identity of a User who wishes to access it.
Authorization
• The process of determining that a requester is allowed to receive a service or perform an
operation.
Availability
• Assuring information and communications services will be ready for use when expected.
• Information must be kept available to authorized persons when they need it.
RECOVERY TESTING
• In software testing, recovery testing is the activity of testing how well an application is
able to recover from crashes, hardware failures and other similar problems.
• Recovery testing is the forced failure of the software in a variety of ways to verify that
recovery is properly performed.
• Recovery testing should not be confused with reliability testing, which tries to discover
the specific point at which failure occurs.
• Recovery testing is basically done in order to check how quickly and how well the application
can recover from any type of crash, hardware failure, etc.
• Type or extent of recovery is specified in the requirement specifications. It is basically
testing how well a system recovers from crashes, hardware failures, or other catastrophic
problems
Examples of recovery testing
• While an application is running, suddenly restart the computer, and afterwards check the
validity of the application's data.
• While an application is receiving data from a network, unplug the connecting cable. After
some time, plug the cable back in and analyze the application's ability to continue
receiving data from the point at which the network connection disappeared.
• Restart the system while a browser has a definite number of sessions. Afterwards, check
that the browser is able to recover all of them.
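The second example, resuming reception after a dropped connection, relies on the application persisting a checkpoint of how much data was confirmed. A simplified sketch with an in-memory list standing in for the network stream:

```python
def receive(stream, checkpoint=0):
    """Receive chunks starting from the last confirmed offset (the checkpoint)."""
    received = []
    for offset in range(checkpoint, len(stream)):
        received.append(stream[offset])
        checkpoint = offset + 1  # persist progress so a failure loses nothing
    return received, checkpoint

# Hypothetical four-chunk transfer interrupted after two chunks.
data = ["chunk0", "chunk1", "chunk2", "chunk3"]
first, saved = receive(data[:2])             # the cable is unplugged here
rest, _ = receive(data, checkpoint=saved)    # plugged back in: resume, don't restart
```

A recovery test verifies exactly this property: the reassembled data equals the original, with no chunk lost or duplicated.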
3.7.4. Regression Testing
Regression testing is a type of software testing that seeks to uncover new software bugs,
or regressions, in existing functional and non-functional areas of a system after changes, such as
enhancements, patches, or configuration changes, have been made to them.
When to do regression testing?
1. Reasonable amount of initial testing is already carried out.
2. A good number of defects have been fixed
3. Defect fixes that can produce side-effects are taken care of.
4. Regression testing should be done periodically.
Methodology for RT
1. Performing an initial "smoke" or "sanity" test
Smoke testing
The term smoke testing came from hardware testing: when you get new hardware and power
it on, if smoke comes out, you do not proceed with further testing. In software testing, a smoke
test is run on initial builds of an application to ascertain that its most critical areas are working
correctly and that the application is ready for thorough testing.
Sanity testing
Once a new build is received after minor changes, instead of starting its complete testing,
a sanity test is conducted to make sure previous defects have been fixed and no new issues have
been introduced by those fixes. Sanity testing is a subset of regression testing.
2. Understanding the criteria for selecting test cases
Two approaches for selecting test cases
1. Constant set of regression tests that are run for every build or change.
2. Selecting test cases dynamically
Selecting test cases requires knowledge of
1. Defect fixes and changes made in the current build
2. The way to test the current changes
3. The impact that the current changes may have on other parts of the system and
4. The ways of testing the other impacted parts.
• Include test cases that have produced the maximum defects in the past.
• Include test cases for a functionality in which a change has been made.
• Include test cases in which problems are reported.
• Include a test case which covers the mandatory requirements of the customer.
• Include test cases to test the positive test conditions.
• Include the area which is highly visible to the users
NOTE: RT should focus more on the impact of defect fixes than on the criticality of the defect
itself.
3. Classifying test cases
• It is important to know the relative priority of test cases for successful test execution.
• Priority is fixed based on importance and customer usage.
• Priority 0 – These are called sanity test cases; they check the basic functionality.
• They are run for accepting the build for further testing.
• They are run when a product goes through major changes.
• Priority 1 – Delivers very high project value to the development teams and the customers.
• Priority 2 – Delivers moderate project value. (It will be used for regression testing on a
need basis.)
4. Methodology for selecting test cases

| Criticality and impact of defect fixes | Selection of test cases | Additional test cases |
| Low | A few test cases from the TCDB (test case database) | - |
| Medium | All Priority 0 and Priority 1 test cases | Test cases from Priority 2 are desirable but not necessary |
| High | All Priority 0 and Priority 1 test cases | A subset of Priority 2 test cases |
If there is not enough time and the risk of not doing an impact analysis is low, then an alternative
methodology can be used:
1. Regress all: P0, P1, and P2 test cases are run - all test cases are executed.
2. Priority-based regression: P0, P1, and P2 test cases are run in order, based on the availability
of time; when to stop depends on the time available.
3. Regress changes: code changes are compared to the last cycle of testing and test cases are
selected based on their impact on the code.
4. Random regression: random test cases are selected and executed.
5. Context-based dynamic regression: a few P0 test cases are selected, and further test cases
are chosen dynamically based on the context and results.
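Priority-based regression (methodology 2) can be sketched as a selection loop over P0, P1, and P2 cases under a time budget; the suite and per-case costs below are hypothetical:

```python
def priority_based_regression(test_cases, time_budget):
    """Select P0, then P1, then P2 test cases until the time budget runs out."""
    selected = []
    for priority in (0, 1, 2):
        for name, cost in test_cases.get(priority, []):
            if cost > time_budget:
                return selected  # stop: no time left for this (or any later) case
            time_budget -= cost
            selected.append(name)
    return selected

# Hypothetical suite: {priority: [(test name, time units), ...]}
suite = {0: [("sanity-login", 2), ("sanity-withdraw", 3)],
         1: [("limits", 5)],
         2: [("report-layout", 8)]}
selected = priority_based_regression(suite, time_budget=10)
# With 10 units of time, only the P0 and P1 cases fit.
```

The same skeleton can implement "regress all" by passing an effectively unlimited budget.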
5. Resetting the test cases for regression testing
• Test cases are selected based on test case result history. Information about test case results
recorded in each cycle is called the test case result history (TCRH).
• In many organizations, neither all the types of testing nor all the test cases are repeated for
each cycle.
• The test case result history records which test cases were executed, and when.
• Reset procedure: the method or procedure that uses the TCRH to indicate that some of the test
cases be selected for regression testing is called the reset procedure.
Points to be considered – Retesting
• When there are major changes in the product.
• When there are changes in the build procedure which affect the product.
• When some of the test cases have not been executed for a long period.
• When the product is in the final regression test cycle with a few selected test cases.
• Where there is a situation that the expected results of the test cases could be quite
different from the previous cycles.
• The test cases relating to defect fixes and production problems need to be evaluated
release after release. In case they are found to be working fine, they can be reset.
• Whenever existing application functionality is removed, the related test cases can be
reset.
• Test cases that consistently produce a positive result can be removed.
• Test cases relating to a few negative test conditions (not producing any defects) can be
removed.
When the above guidelines are not met, we may want to rerun the test cases rather than reset the
results of the test cases. There are only a few differences between the rerun and reset states of
test cases. In both instances, the test cases are executed, but in the case of "reset" we can expect a
different result from what was obtained in the earlier cycles. In the case of rerun, the test cases
are expected to give the same test result as in the past; hence, management need not be
unduly worried, because those test cases are executed as a formality and are not expected to
reveal any major problem.
Test cases belonging to the "rerun" state help to gain confidence in the product by
testing for more time. Such test cases are not expected to fail or affect the release. Test cases
belonging to the "reset" state say that the test results can be different from the past, and only after
these test cases are executed can we know the result of regression and the release status.
A rerun state in a test case indicates low risk and reset status represents medium to high
risk for a release. Hence, close to the product release, it is a good practice to execute the "reset"
test cases first before executing the "rerun" test cases.
Since regression uses test cases that have already been executed more than once, it is
expected that 100% of those test cases pass using the same build, if the defect fixes are done right. In
situations where the pass percentage is not 100%, the test manager can compare with the
previous results of the test case to conclude whether the regression was successful or not.
| Current result from regression | Previous result | Conclusion | Remarks |
| FAIL | PASS | FAIL | Need to improve the regression process and code reviews |
| PASS | FAIL | PASS | This is the expected result of a good regression, showing that defect fixes work properly |
| FAIL | FAIL | FAIL | Need to analyze why defect fixes are not working. "Is it a wrong fix?" Also should analyze why this test is rerun for regression |
| PASS | PASS | PASS | This pattern of results gives a comfort feeling that there are no side-effects due to defect fixes |
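The comparison of current and previous results can be expressed as a small decision function mirroring the table above; the message strings are illustrative:

```python
def regression_verdict(current, previous):
    """Interpret a regression run by comparing the current result with the previous one."""
    if current == "PASS":
        # A previous FAIL turning PASS is the expected sign of a good defect fix;
        # PASS/PASS gives comfort that there are no side-effects.
        return "PASS"
    if previous == "PASS":
        return "FAIL: possible side-effect - improve the regression process and code reviews"
    return "FAIL: the defect fix did not work - analyze whether it is a wrong fix"
```

Such a function could run over the whole test case result history to flag the rows a test manager must inspect.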
Best Practices in Regression Testing
1. Regression can be used for all types of releases.
2. Mapping defect identifiers with test cases improves regression quality.
3. Create and execute regression test bed daily.
4. Ask your best test engineer to select the test case.
5. Detect defects, and protect your product from defects and defect fixes.
3.7.5. Usability and Accessibility Testing
Usability Testing is:
A means for measuring how well people can use some human-made object (such as a
web page, a computer interface, a document, or a device) for its intended purpose.
The testing that validates the ease of use, speed and aesthetics of the product from the
user’s point of view is called usability testing.
Why is usability validated and not tested?
Perceptions of good usability vary from user to user. For example, a developer may
consider the use of command line flags a good user interface, while an end user will want everything in
terms of GUI elements such as menus, dialog boxes, and so on.
Characteristics of Usability testing
UT tests the product from the user's point of view.
UT checks the product to see if it is easy to use for the various categories of
users.
UT is the process of identifying discrepancies between the user interface of the product and
the human requirements, in terms of pleasantness and aesthetics.
APPROACH TO USABILITY
In UT, certain human factors can be represented in a quantifiable way and tested
objectively. Ex: the number of mouse clicks, the number of keystrokes, and the number of
commands used to perform a task.
UT is not only for product binaries or executables. It also applies to documentation and
other deliverables that are shipped along with a product.
For example, an AUTORUN script automatically brings up the product setup when the release media is
inserted in the machine. Sometimes this script is written for a particular operating system
version and may not get auto-executed on a different OS version.
PEOPLE SUITED TO PERFORM UT
Typical representatives of the actual user segments who would be using the product, so
that the typical user patterns can be captured.
People who are new to the product, so that they can start without any bias and are able to
identify usability problems.
Generally it is difficult to develop test cases for usability. Checklists and guidelines are
prepared for UT.
Usability depends on the messages the system gives to its users.
Informational message – verified to find out whether an end user can
understand the message and associate it with the operation done.
Warning message – checked for why it happened and what to do to avoid the
warning.
Error message – checked for what the error is, why it happened, and what to do to avoid or
work around it.
A system should intelligently detect and avoid wrong usage and if wrong usage cannot be
avoided it should provide appropriate and meaningful messages.
UT should cover both positive testing (PT) and negative testing (NT), to verify the correct
and incorrect usage of the product.
VERIFICATION OF USABILITY DESIGN
Style sheets
• Style sheets (SS) are groupings of user interface design elements.
• Use of style sheets ensures consistency of design elements across several screens, and testing
them ensures that the basic usability design is tested. Style sheets are checked for font size,
color scheme, and so on.
Screen prototypes
• Screens are designed as they will be shipped to the customer, but are not integrated
with other modules of the product.
• User interface is tested independently without integrating with the functionality
modules.
• This prototype gives an idea of how exactly the screen will look and function when the
product is released.
• Test team and real life users test this prototype.
Paper design
• The design of the screens, layout, and menus is drawn up on paper and sent to users for
feedback.
• Usage of style sheets requires further coding, and prototypes need binaries and
resources to verify, but paper designs do not require any other resources.
Layout design
• Layout helps in arranging different elements on the screen dynamically.
• It ensures proper arrangement of elements, spacing, font sizes, pictures, and so on, on the
screen.
USABILITY
Usability is a habit and a behavior. Just as humans are, products are expected to behave
differently and correctly with different users, according to their expectations.
Checklist for usability testing
• Do users complete the assigned tasks/operations successfully?
• If so, how much time do they take to complete the tasks/operations?
• Is the response from the product fast enough to satisfy them?
• Where did the users get stuck? What problems did they have?
• Where did they get confused? Were they able to continue on their own? What helped
them to continue?
QUALITY FACTORS FOR USABILITY
1. Comprehensibility
• The product should have a simple and logical structure of features and documentation.
• Features should be grouped on the basis of user scenarios and usage.
• The most frequently used operations should be presented first in the user interface.
2. Consistency
• A product should be consistent with any applicable standards, platform look and feel, and
base infrastructure.
• Multiple products from the same company should be consistent in look and feel.
• User interfaces that differ across operating system services irritate the user.
3. Navigation
• It helps in determining how easy it is to select the different operations of the product.
• The number of mouse clicks should be minimized to perform any operation to
improve the usability.
4. Responsiveness
How fast the product responds to a user request.
Whenever the product is processing some information, the visual display should indicate
the progress and also the amount of time left, so that users can wait patiently till the
operation is completed.
AESTHETICS TESTING
• An important aspect of usability is making the product "beautiful".
• Aesthetics-related problems in the product are generally mapped to a defect
classification called "cosmetic", which is low priority.
• It is not possible for every product to measure up to the Taj Mahal for its beauty.
Testing for aesthetics can at least ensure the product is pleasing to the eye.
• Aesthetics is not in the external look alone. It is in all aspects, such as colors,
icons, messages, screens, and images.
ACCESSIBILITY TESTING
Testing the usability of the product for physically challenged users is called accessibility
testing. For such users, an alternative method of using the product has to be provided.
Accessibility of the product can be provided by two means:
1. Making use of accessibility features provided by the underlying infrastructure (ex: the OS),
called basic accessibility, and
2. Providing accessibility in the product through standards, called product accessibility.
BASIC ACCESSIBILITY
It is provided by the hardware and operating system.
Keyboard accessibility: input and output devices of the computer and their accessibility options
are categorized under basic accessibility.
Ex: keyboard – the function (F) keys at the top of the keyboard.
Sticky Keys
It is an accessibility feature to help Windows users who have physical disabilities, but it
is also used by others as a means to reduce repetitive strain injury (or a syndrome called
Emacs pinky). It essentially serializes keystrokes instead of requiring multiple keys to be pressed at a time:
StickyKeys allows the user to press and release a modifier key, such as Shift, Ctrl, Alt, or the
Windows key, and have it remain active until any other key is pressed.
Filter keys
Useful for stopping key repetition completely or slowing down the repetition rate.
ToggleKeys
It is a feature of Microsoft Windows. It is an accessibility function designed for
people who have vision impairment or cognitive disabilities. When ToggleKeys is turned on,
the computer provides sound cues when the locking keys (Caps Lock, Num Lock) are pressed: a
high-pitched sound plays when the keys are switched on and a low-pitched sound plays when
they are switched off.
Sound Keys – pronounces each key as it is hit on the keyboard.
Arrow keys – can be used to control the mouse pointer.
Screen accessibility
• Keyboard accessibility is for vision-impaired and mobility-impaired users.
• Hearing-impaired users require visual feedback on the screen.
• Enabling captions for multimedia: all multimedia speech and sound can be given
text equivalents, which are displayed on the screen when the speech and sound are
played.
• Soft keyboards: mobility- and vision-impaired users find it easier to use pointing devices
instead of the keyboard. A soft keyboard helps such users by displaying the keyboard on the
screen. Characters can be typed by clicking on the keyboard layout on the screen
using pointing devices such as the mouse.
• Easy reading with high contrast: vision-impaired users have problems recognizing
some colors and font sizes in menu items. A toggle option is available to switch to a
high-contrast mode, which uses pleasing, high-contrast colors.
PRODUCT ACCESSIBILITY
1. Text equivalents have to be provided for audio, video, and picture images to improve
accessibility for the hearing impaired.
2. Documents and fields should be organized so that they can be read without requiring a
particular screen resolution, and templates (style sheets) should be used.
3. User interfaces should be designed so that all information conveyed with color is also
available without color.
Example:
USE GREEN BUTTON TO START THE PROGRAM. USE RED BUTTON TO STOP THE
RUNNING PROGRAM.
Color-blind users may not be able to select the right button for these operations.
Correct approach – retain the colors but also name the buttons appropriately.
4. Reduce the flicker rate and the speed of moving text, and avoid flashing and blinking text.
The reading speed of below-average readers is normally low, and they may find blinking
and flashing text irritating. Even people with good vision find flashes and flickers beyond a
particular frequency uncomfortable (usability standards: 2 Hz to 55 Hz).
5. Reduce the physical movement required of users when designing the interface, and allow
adequate time for user responses. Spreading user interface elements to the corners of the screen
should be avoided.
Usability and accessibility tools

| Name of the tool | Purpose |
| JAWS | For testing accessibility of the product with some assistive technologies |
| HTML validator | To validate the HTML source file |
| Style sheet validator | To validate the style sheets against usability standards set by the W3C |
| Magnifier | Accessibility tool for vision-challenged users |
| Narrator | Reads the information displayed on the screen and creates an audio description for vision-challenged users |
| Soft keyboard | Enables the use of pointing devices to use the keyboard by displaying it on the screen |
USABILITY LAB SETUP
It has two sections – a recording section and an observation section.
Recording section – The user is requested to come to the lab with a prefixed set of operations
that are to be performed in the recording section. The user is explained the product usage and given
the documentation in advance, and comes prepared to perform the tasks.
Observation section – Usability experts sit and observe the user's body language and associate
the defects with the screens.
Observation is made through one-way glass – the experts can see the user but the user cannot see
the experts. Cameras also monitor from different angles. After watching different users use the
product, the usability experts suggest usability improvements in the product.
3.7.6. Internationalization Testing
Internationalization testing is a non-functional testing technique. It is a process of designing a
software application that can be adapted to various languages and regions without any changes.
Internationalization testing is the process of verifying the application under test to work
uniformly across multiple regions and cultures.
The main purpose of internationalization is to check if the code can handle all international
support without breaking functionality that might cause data loss or data integrity issues.
Primer on Internationalization
Definition of language: language is a tool used for communication. Here the focus is on human
languages (such as Japanese, English, and French), not on computer languages (Java, C, and C++).
A language has a set of characters, a set of valid words formed from those characters, and a grammar.
Character set
Standards are used to represent the characters of different languages in the computer. Ex: ASCII is a
single-byte (8-bit) representation for the characters used in computers. Using 8 bits,
2^8 = 256 characters can be represented in binary.
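The byte representations can be inspected directly in Python; UTF-8 (a Unicode encoding) is shown alongside ASCII to illustrate why 256 characters are not enough for all languages:

```python
text = "A"
# ASCII: one byte per character, code point 65 (0x41) for 'A'.
ascii_bytes = text.encode("ascii")

# Languages such as Japanese need a larger character set; in UTF-8 the
# hiragana character U+3042 takes three bytes.
japanese_bytes = "\u3042".encode("utf-8")
```

This byte-length difference is exactly what I18n-enabled code must handle in its buffers and string operations.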
Locale
Commercial software not only needs to remember the language but also the country in which it
is spoken. There are conventions associated with a language which need to be taken care of in the
software. Two countries may speak the same language with identical grammar,
words, and character set; however, there can still be variations, such as currency and date
formats. A locale is the term used to capture these parameters and conventions.
For example, English is spoken in both the USA and India. However, the currency symbols used in
the two countries are different ($ and Rs respectively). The punctuation symbols used in numbers
are also different: 1,000,000 is represented in the USA as 1,000,000 and as 10,00,000
in India. Software needs to remember the locale, apart from the language, for it to function properly.
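The two digit-grouping conventions can be reproduced without relying on platform locale data. A sketch of the Western rule (groups of three) and the Indian rule (last three digits, then groups of two):

```python
def group_us(n):
    """1,000,000 style: digits grouped in threes."""
    return f"{n:,}"

def group_in(n):
    """10,00,000 style: last three digits, then groups of two (lakh/crore convention)."""
    s = str(n)
    if len(s) <= 3:
        return s
    head, tail = s[:-3], s[-3:]
    groups = []
    while len(head) > 2:
        groups.insert(0, head[-2:])  # peel off two digits at a time
        head = head[:-2]
    groups.insert(0, head)
    return ",".join(groups + [tail])
```

Production code would normally delegate this to locale-aware libraries rather than hand-rolled rules; the sketch only shows what locale testing has to verify.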
Internationalization (I18n): it means all the activities that are required to make the software available
for the international market.
Localization (L10n): It is a term used to mean the translation work of software resources such as
messages to the target language and conventions.
Globalization (G11n) : It is used to mean internationalization and localization.
Enabling Testing
An activity of code review or code inspection, mixed with some test cases for unit testing, with the
objective of catching I18n defects, is called enabling testing.
Checklist for enabling testing
Check the code for hard-coded date or currency formats, ASCII codes, or character constants.
Check the dialog boxes and screens to see whether they leave at least 0.5 times more space for
expansion of translated text.
Ensure that region- or culture-based messages and slang are not in the code.
Ensure that adequate size is provided for buffers and variables to contain translated messages.
Check that no messages contain technical jargon and that all messages can be understood even by the
least experienced user of the product.
If the code uses scrolling of text, then the screens and dialog boxes must allow adequate
provision for direction changes in scrolling, such as top to bottom, right to left, left to right, bottom
to top, and so on, as conventions differ between languages. For example, Arabic uses a
"right to left" direction for reading and "left to right" for scrolling.
[Figure: reading and scrolling directions for Japanese, English, and Arabic]
Locale testing
Once the code has been verified for I18n and the enabling test is completed, changing to
different locales using the system settings or environment variables and testing the software
functionality and the number, date, time, and currency formats is called locale testing.
Checklist for locale testing
• Hot keys, function keys and help screens should be tested with different applicable
locales.
• Date and time formats are in line with the defined locale of the language.
• Currency is in line with the selected locale and language.
• Number format is in line with selected locale and language.
• Time zone information
Note: locale testing focuses on testing the conventions for numbers, punctuation, date and time,
and currency formats.
Internationalization validation
It focuses on component functionality for the input/output of non-English messages.
Checklist for validation
• The functionality in all languages and locales is the same.
• Input to the software in non-ASCII or special characters (using tools such as an IME)
can be entered, and the functionality must be consistent.
• Non-ASCII characters in names are displayed exactly as they were entered.
• Cut or copy and paste of non-ASCII characters retains their styles after pasting, and
the software functions as expected.
• The software functions correctly with words from different languages. For example, login
should work with both an English user name and a German user name.
• The documentation follows a consistent style and punctuation, and all
language conventions are followed.
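A sketch of the round-trip and login checks in the list above; the tiny in-memory user store is a stand-in for the real product, invented for illustration:

```python
# In-memory stand-in for the system under test.
users = {}

def register(name, password):
    users[name] = password

def login(name, password):
    return users.get(name) == password

# Non-ASCII user names must round-trip and behave like English ones.
register("alice", "pw1")
register("Jürgen", "pw2")    # German umlaut
register("山田太郎", "pw3")   # Japanese characters

assert login("alice", "pw1")
assert login("Jürgen", "pw2")
assert login("山田太郎", "pw3")
assert "Jürgen" in users     # stored exactly as entered
```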
Fake Language Testing
Fake language testing helps in simulating the functionality of the localized product for a
different language using software translators.
This also ensures that switching between languages works properly and correct
messages are picked up from proper directories that have the translated messages. Fake language
testing helps in identifying the issues proactively before the product is localized. For this
purpose, all messages are consolidated from the software, and fake language conversions are
done by tools and tested. The fake language translators use English-like target languages, which
are easy to understand and test. Figure illustrates fake language testing.
[Figure: fake language testing — the English message "Hello" in the product is converted by a fake (Pig-Latin-like) translator to "Ellohay"]
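The "Hello" → "Ellohay" conversion shown in the figure is a Pig-Latin-style fake translation. A sketch of such a fake translator (a simplified illustration, not a production tool):

```python
def fake_translate(word):
    """Pig-Latin-style fake translation: move the leading consonant
    cluster to the end, append 'ay', and preserve capitalization."""
    vowels = "aeiou"
    lower = word.lower()
    i = 0
    while i < len(lower) and lower[i] not in vowels:
        i += 1
    pig = lower[i:] + lower[:i] + "ay"
    return pig.capitalize() if word[0].isupper() else pig

assert fake_translate("Hello") == "Ellohay"  # matches the figure
```

Because the output is still English-like, a tester can verify that the right message catalog is loaded and displayed without knowing the eventual target language.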
Documentation
The software documentation needs to be localized for the target language. For example,
people in English-speaking countries understand that the dirty t-shirt on the left-hand side,
when put inside the washing machine, becomes a clean t-shirt as shown on the right-hand side,
because people in these countries read from left to right. If the same picture is shown to people in Arab
countries, they may understand that a clean t-shirt, when put inside the washing machine, becomes
dirty, since they read from right to left.
3.7.7. Ad hoc Testing
Ad hoc testing is an informal testing type whose aim is to break the system. It is
usually an unplanned activity and does not follow any test design techniques to create test cases;
in fact, it does not involve creating test cases at all. This testing is primarily performed when the
testers' knowledge of the system under test is very high. Testers randomly test the application
without any test cases or any business requirement document.
Ad hoc testing does not follow any structured way of testing; it is done randomly on
any part of the application. The main aim of this testing is to find defects by random checking. Ad hoc
testing can be supported by the testing technique called error guessing. Error guessing can be
done by people having enough experience with the system to "guess" the most likely sources of
errors.
This testing requires no documentation, planning, or process to be followed. Since it
aims at finding defects through a random approach, without any documentation, defects
will not be mapped to test cases.
Characteristics of Ad-hoc testing
1. Ad-hoc testing is done after the completion of the formal testing on the application or
product.
2. This testing is performed with the aim to break the application without following any
process.
3. The testers executing the ad-hoc testing should have thorough knowledge on the product.
4. The bugs found during ad-hoc testing expose the loopholes of the testing process
followed.
5. Ad-hoc testing is typically executed only once, unless a defect is found that requires
retesting.
Advantages or benefits of Ad-hoc testing:
Below are a few of the advantages or benefits of ad-hoc testing:
1. Ad-hoc testing gives the tester the freedom to apply their own new ways of testing the
application, which helps them find more defects than the formal testing process.
2. This type of testing can be done at any time, anywhere in the Software Development Life
Cycle (SDLC), without following any formal process.
3. This type of testing is not limited to the testing team; it can also be done by developers
while developing their modules, which helps them write better code.
4. Ad-hoc testing proves very beneficial when time is short and in-depth testing of
a feature is required. It helps deliver the feature with quality and on time.
5. Ad-hoc testing can be executed simultaneously with other types of testing, which
helps find more bugs in less time.
6. In this type of testing documentation is not necessary, which lets the tester do
focused testing of the feature or application without worrying about formal
documentation.
Disadvantages of Ad-hoc testing:
1. The test scenarios executed during ad-hoc testing are not documented, so the tester has
to keep all the scenarios in mind, which he or she might not be able to recollect in
future.
2. Ad-hoc testing is very dependent on a skilled tester who has thorough knowledge
of the product; it cannot be done by a new joiner of the team.
Types of Ad-hoc testing
Basically, there are three types of ad-hoc testing:
1. Buddy testing: This type of testing is done by the developer and the tester who are
responsible for a particular module's delivery. The developer and tester
sit together and work on that module, which helps avoid building invalid
scenarios and, in turn, prevents the tester from reporting invalid defects.
2. Pair testing: In this type of testing, two testers work together on one module. They basically
divide the testing scenarios between them. The aim is to come up with the
maximum number of testing scenarios so that the entire module has complete test coverage. After
testing the entire module together, they can also document their test scenarios and observations.
3. Monkey testing: In this type of testing, random tests are executed with random
data with the aim of breaking the system. This testing helps discover new bugs which
might not have been caught earlier.
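A minimal monkey-test sketch: hammer a parser with random data and assert that it never crashes and never returns an out-of-range value. The `parse_age` function is a hypothetical component invented for illustration:

```python
import random
import string

def parse_age(text):
    """Hypothetical function under test: parse a human age, else None."""
    try:
        value = int(text)
    except ValueError:
        return None
    return value if 0 <= value <= 150 else None

random.seed(42)  # reproducible randomness for the example
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 10)))
    result = parse_age(junk)  # must never raise, whatever the input
    assert result is None or 0 <= result <= 150
```

The monkey test does not know the expected output for each input; it only checks invariants that must hold for every input, which is what "breaking the system with random data" amounts to in practice.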
3.7.8. Testing Object-Oriented Systems
Object-oriented testing is a collection of testing techniques to verify and validate object-oriented
software.
Testing is a continuous activity during software development. In object-oriented systems, testing
encompasses three levels, namely, unit testing, subsystem testing, and system testing.
Unit Testing
In unit testing, the individual classes are tested. It is checked whether the class attributes are
implemented as per the design and whether the methods and interfaces are error-free. Unit
testing is the responsibility of the application engineer who implements the structure.
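For example, a unit test for a single class might check both its attributes and its method contracts. The `Account` class below is a hypothetical example, not from the text:

```python
import unittest

class Account:
    """Hypothetical class under test."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount
        return self.balance

class TestAccount(unittest.TestCase):
    def test_attribute_initialized_per_design(self):
        self.assertEqual(Account().balance, 0)

    def test_deposit_updates_balance(self):
        self.assertEqual(Account(10).deposit(40), 50)

    def test_method_rejects_invalid_input(self):
        with self.assertRaises(ValueError):
            Account().deposit(0)

# Run the class's unit tests programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAccount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test method exercises one aspect of the class in isolation, which is exactly the unit-level goal described above.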
Subsystem Testing
This involves testing a particular module or a subsystem and is the responsibility of the
subsystem lead. It involves testing the associations within the subsystem as well as the
interaction of the subsystem with the outside. Subsystem tests can be used as regression tests for
each newly released version of the subsystem.
System Testing
System testing involves testing the system as a whole and is the responsibility of the quality-
assurance team. The team often uses system tests as regression tests when assembling new
releases.
Object-Oriented Testing Techniques
Grey Box Testing
The different types of test cases that can be designed for testing object-oriented programs are
called grey box test cases. Some of the important types of grey box testing are −
• State model based testing − This encompasses state coverage, state transition coverage,
and state transition path coverage.
• Use case based testing − Each scenario in each use case is tested.
• Class diagram based testing − Each class, derived class, associations, and aggregations
are tested.
• Sequence diagram based testing − This method is used to check the sequence
diagrams.
Techniques for Subsystem Testing
The two main approaches of subsystem testing are −
• Thread based testing − All classes that are needed to realize a single use case in a
subsystem are integrated and tested.
• Use based testing − The interfaces and services of the modules at each level of the hierarchy
are tested. Testing starts from the individual classes, proceeds to the small modules composed of
classes, gradually to larger modules, and finally to all the major subsystems.
Categories of System Testing
• Alpha testing − This is carried out by the testing team within the organization that
develops software.
• Beta testing − This is carried out by a select group of co-operating customers.
• Acceptance testing − This is carried out by the customer before accepting the
deliverables.
Software Quality Assurance
Software Quality
Schulmeyer and McManus have defined software quality as “the fitness for use of the total
software product”. A good quality software does exactly what it is supposed to do and is
interpreted in terms of satisfaction of the requirement specification laid down by the user.
Quality Assurance
Software quality assurance is a methodology that determines the extent to which a software
product is fit for use. The activities that are included for determining software quality are −
• Auditing
• Development of standards and guidelines
• Production of reports
• Review of quality system
Quality Factors
• Correctness − Correctness determines whether the software requirements are
appropriately met.
• Usability − Usability determines whether the software can be used by different categories
of users (beginners, non-technical, and experts).
• Portability − Portability determines whether the software can operate in different
platforms with different hardware devices.
• Maintainability − Maintainability determines the ease at which errors can be corrected
and modules can be updated.
• Reusability − Reusability determines whether the modules and classes can be reused for
developing other software products.
Object-Oriented Metrics
Metrics can be broadly classified into three categories: project metrics, product metrics, and
process metrics.
Project Metrics
Project Metrics enable a software project manager to assess the status and performance of an
ongoing project. The following metrics are appropriate for object-oriented software projects −
• Number of scenario scripts
• Number of key classes
• Number of support classes
• Number of subsystems
Product Metrics
Product metrics measure the characteristics of the software product that has been developed. The
product metrics suitable for object-oriented systems are −
• Methods per Class − It determines the complexity of a class. If all the methods of a class
are assumed to be equally complex, then a class with more methods is more complex and
thus more susceptible to errors.
• Inheritance Structure − Systems with several small inheritance lattices are better
structured than systems with a single large inheritance lattice. As a rule of thumb, an
inheritance tree should not have more than 7 (± 2) levels, and the tree should
be balanced.
• Coupling and Cohesion − Modules having low coupling and high cohesion are
considered to be better designed, as they permit greater reusability and maintainability.
• Response for a Class − It measures the efficiency of the methods that are called by the
instances of the class.
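The inheritance-depth rule of thumb above can be checked mechanically. A sketch in Python (the sample classes are illustrative):

```python
def inheritance_depth(cls):
    """Depth of a class in the inheritance tree, counting object as 0."""
    if cls is object:
        return 0
    return 1 + max(inheritance_depth(base) for base in cls.__bases__)

class Vehicle: pass
class Car(Vehicle): pass
class SportsCar(Car): pass

assert inheritance_depth(SportsCar) == 3
assert inheritance_depth(SportsCar) <= 7 + 2  # within the 7 (± 2) guideline
```

A metrics tool would run such a check over every class in the system and flag trees that exceed the guideline.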
Process Metrics
Process metrics help in measuring how a process is performing. They are collected over all
projects over long periods of time. They are used as indicators for long-term software process
improvements. Some process metrics are −
• Number of KLOC (Kilo Lines of Code)
• Defect removal efficiency
• Average number of failures detected during testing
• Number of latent defects per KLOC
3.7.9. Configuration Testing
Configuration testing is the method of testing an application with multiple combinations
of software and hardware to find the optimal configurations under which the system works without
any flaws or bugs.
Configuration testing is software testing in which the application under test is
tested using multiple combinations of software and hardware.
Configuration Testing Example
Let's understand this with an example of a Desktop Application:
Generally, desktop applications are 2-tier or 3-tier; here we will consider a 3-tier desktop
application developed using ASP.NET, consisting of a client, a business logic server, and a
database server, where each component supports the platforms mentioned below.
Client platform – Windows XP, Windows 7, Windows 8, etc.
Server platform – Windows Server 2008 R2, Windows Server 2012 R2, etc.
Database – SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, etc.
A tester has to test the Combination of Client, Server and Database with combinations of the
above-mentioned platforms and database versions to ensure that the application is functioning
properly and does not fail.
Configuration testing is not restricted to software but is also applicable to hardware, which is
why it is also referred to as hardware configuration testing, where we test different hardware
devices like printers, scanners, web cams, etc. that support the application under test.
Pre-requisites for Configuration Testing
For any project, before starting configuration testing, we have to satisfy some pre-requisites:
• Creation of a matrix consisting of various combinations of software and hardware
configurations
• Prioritizing the configurations, as it is difficult to test all of them
• Testing every configuration based on prioritization.
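The first pre-requisite, building the configuration matrix, can be sketched as a cross product. The platform lists below echo the earlier desktop-application example, and the priority rule is an assumption for illustration:

```python
from itertools import product

clients = ["Windows XP", "Windows 7", "Windows 8"]
servers = ["Windows Server 2008 R2", "Windows Server 2012 R2"]
databases = ["SQL Server 2008", "SQL Server 2008 R2", "SQL Server 2012"]

# Every client/server/database combination that may need testing.
matrix = list(product(clients, servers, databases))
assert len(matrix) == 3 * 2 * 3  # 18 combinations

# Prioritize: e.g., test the most widely deployed client first
# (the priority rule here is an assumption for illustration).
priority = [combo for combo in matrix if combo[0] == "Windows 7"]
```

This makes the second pre-requisite concrete: the full matrix grows multiplicatively, so prioritization decides which of the 18 combinations actually get tested first.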
Objectives of Configuration Testing
The objectives of configuration testing are to:
• Validate the application to determine whether it fulfills the configurability requirements
• Manually cause failures that help identify defects not efficiently found during
testing (e.g., changing the regional settings of the system such as time zone,
language, date and time formats, etc.)
• Determine an optimal configuration of the application under test
• Analyze the system performance by adding or modifying hardware resources such as
load balancers, increasing or decreasing memory size, connecting various printer models,
etc.
• Analyze system efficiency based on the prioritization: how efficiently the tests were
performed with the resources available to achieve the optimal system configuration
• Verify the system in a geographically distributed environment to check how
effectively the system performs.
For example, with the server at one location and the clients at a different location, the system should work
fine irrespective of the system settings.
• Verify how easily bugs can be reproduced irrespective of configuration changes
• Ensure the traceability of application items by properly documenting and
maintaining versions that are easily identifiable
• Verify how manageable the application items are throughout the software
development life cycle.
Types of Configuration testing
There are two types of configuration testing, as mentioned below:
• Software Configuration Testing
• Hardware Configuration Testing
Software Configuration Testing
Software configuration testing is testing the application under test with multiple operating
systems, different software updates, etc. It is very time consuming, as it takes time to
install and uninstall the different software used for the testing.
One approach to testing the software configuration is to test on virtual
machines. A virtual machine is an environment installed as software that acts like
physical hardware, giving users the same feel as a physical machine. Virtual machines
simulate real-time configurations.
Instead of installing and uninstalling the software on multiple physical machines, which is time-
consuming, it is always better to install the application or software in a virtual machine and
continue testing. This process can be performed with multiple virtual machines, which
simplifies the tester's job.
Software configuration testing can typically begin when:
• The configurability requirements to be tested are specified
• The test environment is ready
• The testing team is well trained in configuration testing
• The released build has passed unit and integration testing
The typical test strategy for software configuration testing is to run the
functional test suite across multiple software configurations to verify that the application under test
works as desired without any flaws or errors.
Another strategy is to ensure the system works correctly by manually inducing failures and
verifying the results.
Hardware Configuration Testing
Hardware configuration testing is generally performed in labs, where we find physical machines
with different hardware attached to them.
Whenever a build is released, the software has to be installed in all the physical machines where
the hardware is attached, and the test suite has to be run on each machine to ensure that the
application is working fine.
To perform the above task, a significant amount of effort is required to install the software on
each machine, attach the hardware, and manually run the test suite, or even to automate the
above process and run the suite.
Also, while performing hardware configuration testing, we specify the type of hardware to be tested,
and there is so much computer hardware and so many peripherals that running all of them is
quite impossible. So it becomes the duty of the tester to analyze which hardware is most used by
users and prioritize the testing accordingly.
Sample Test Cases
Consider a banking scenario to test hardware compatibility. A banking application that
is connected to a note counting machine has to be tested with different models such as Rolex, Strob,
Maxsell, StoK, etc.
Let's take some sample test cases to test the Note Counting Machine
• Verifying the connection of application with Rolex model when the prerequisites are
NOT installed
• Verifying the connection of application with Rolex model when the prerequisites are
installed
• Verify that the system counts the notes correctly
• Verify the system behavior when the notes are counted incorrectly
• Verifying the tampered notes
• Verifying the response times
• Verifying if the fake notes are detected and so on
The above test cases are for one model; the same has to be tested with all the models
available in the market by setting them up in a test lab, which is difficult.
Configuration testing should be given importance equal to other testing types. Without
configuration testing, it is difficult to determine the optimal system performance,
and the software might encounter compatibility issues with the environments it is supposed to run on.
3.7.10. Website Testing
Web testing is the name given to software testing that focuses on web applications.
Complete testing of a web-based system before going live can help address issues before the
system is revealed to the public.
Challenges in Web application testing:
Web-based systems and applications reside on networks and interoperate with many different:
1. Operating systems
2. Browsers
3. Hardware platforms
4. Communications protocols
Dimensions of Quality for Web Applications:
Quality is incorporated into a web application as a consequence of good design. Reviews and
testing examine one or more of the following quality dimensions:
1. Content 2. Function 3. Structure 4. Usability 5. Navigability 6. Performance 7. Compatibility
8. Interoperability 9. Security
Testing approach for web application
• The content model for the web app is reviewed to uncover errors.
• The interface model is reviewed to ensure that all use cases can be accommodated.
• The design model for the web app is reviewed to uncover navigation errors.
• The user interface is tested to uncover errors in presentation and/or navigation mechanics
• Functional components are unit tested.
• Navigation throughout the architecture should be tested.
• The web app is implemented in a variety of different environmental configurations and is
tested for compatibility with each configuration.
• Security tests are conducted in an attempt to exploit vulnerabilities in the web app or within
its environment
• Performance tests should be conducted.
• The web app is tested by a controlled and monitored population of end users; the results of
their interaction with the system are evaluated for content and navigation errors, usability
concerns, compatibility concerns, and web app security, reliability, and performance.
Type of Testing
Content testing has three important objectives
1. To uncover syntactic errors (e.g., typos, grammar mistakes) in text-based documents,
graphical representations, and other media
2. To uncover semantic errors (i.e., focuses on the information presented within each content
object)
3. To find errors in the organization or structure of the content that is presented to the end user.
Database testing
• Tests should be designed to uncover errors made in translating the user's request into a form
that can be processed by the DBMS.
• Tests that uncover errors in communication between the web app and the remote database must
be developed.
• Raw data acquired from the database must be transmitted to the web app server and properly
formatted for subsequent transmittal to the client.
• Tests that demonstrate the validity of the transformations applied to the raw data to create valid
content objects must also be created.
• Content and compatibility testing will be done after the dynamic content object is transmitted
to the client in a form that can be displayed to end user.
User Interface Testing
Interface features, including type fonts, the use of colours, frames, images, borders, tables, and
related features that are generated as web app execution proceeds, should be tested.
When a user interacts with a web app, the interaction occurs through one or more interface
mechanisms.
Links:
Each navigation link is tested to ensure that the proper content object or function is reached.
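Link checks of this kind are easy to automate. A sketch that extracts every href from a page so each target can then be verified (the sample HTML is illustrative):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets so each navigation link can be checked."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<a href="/home">Home</a> <a href="/about">About</a>'
collector = LinkCollector()
collector.feed(page)
assert collector.links == ["/home", "/about"]
# Each collected link would then be requested to confirm that the
# proper content object or function is reached.
```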
Forms:
At a macroscopic level, tests are performed to ensure that
• Labels correctly identify fields within the form and mandatory fields are identified
visually for the user
• The server receives all information contained within the form and that no data are lost in the
transmission between client and server
• Appropriate defaults are used when the user does not select from a pull-down menu or set of
buttons
• Browser functions (e.g., back arrow) do not corrupt data entered in a form.
Pop-up windows:
A series of tests ensure that
• The pop-up is properly sized and positioned
• The pop-up does not cover the original web app window
• The aesthetic design of the pop-up is consistent with aesthetic design of the interface
• Scroll bars and other control mechanisms appended to the pop-up are properly located and
function as required.
Component level testing
Each web app function is a software component and can be tested using black-box testing (BBT)
and white-box testing (WBT). Black-box techniques include equivalence partitioning and
boundary value analysis; white-box techniques include path testing.
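For instance, boundary value analysis on a hypothetical component exercises the edges of each equivalence partition; the `discount` rules below are invented for illustration:

```python
def discount(age):
    """Hypothetical component: 50% for ages 0-12, 0% for 13-64, 30% for 65+."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age <= 12:
        return 0.50
    if age <= 64:
        return 0.0
    return 0.30

# Boundary values sit at the edges of each equivalence partition.
for age, expected in [(0, 0.50), (12, 0.50), (13, 0.0), (64, 0.0), (65, 0.30)]:
    assert discount(age) == expected
```

Equivalence partitioning would pick one representative per range (say 5, 40, and 80), while boundary value analysis, as shown, targets the values where off-by-one defects are most likely.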
Configuration testing
Configuration testing has to be performed to address the following questions:
• Do system security measures (e.g., firewalls or encryption) allow the webapp to execute
and service users without interference or performance degradation?
• Is the webapp properly integrated with database software?
• Is the webapp sensitive to different versions of database software?
• Do server-side webapp scripts execute properly?