Post on 30-Dec-2015
Software Testing: Destructive Behavior with Constructive Attitude
Testing Terminologies
Testing is the process of evaluating a system or its component(s) with the intent to find whether it satisfies the specified requirements. In simple words, testing means executing a system in order to identify any gaps, errors, or missing requirements contrary to the actual requirements.
According to ANSI/IEEE 1059 standard, Testing can be defined as - A process of analyzing a software item to detect the differences between existing and required conditions (that is defects/errors/bugs) and to evaluate the features of the software item.
Who does Testing?
It depends on the process and the associated stakeholders of the project(s). In the IT industry, large companies have a team whose responsibility is to evaluate the developed software in the context of the given requirements. Moreover, developers also conduct testing, which is called Unit Testing. In most cases, the following professionals are involved in testing a system within their respective capacities:
- Software Tester
- Software Developer
- Project Lead/Manager
- End User
Different companies have different designations for people who test the software on the basis of their experience and knowledge such as Software Tester, Software Quality Assurance Engineer, QA Analyst, etc.
It is not possible to test the software at just any time during its life cycle. The next two sections state when testing should start and when it should end during the SDLC.
When to Start Testing?
An early start to testing reduces the cost and time to rework and produce error-free software that is delivered to the client.
In the Software Development Life Cycle (SDLC), testing can start from the Requirements Gathering phase and continue till the deployment of the software.
It also depends on the development model that is being used. For example, in the Waterfall model, formal testing is
conducted in the testing phase; but in the incremental model, testing is performed at the end of every increment/iteration and the whole application is tested at the end.
Testing is done in different forms at every phase of the SDLC:
- During the requirements gathering phase, the analysis and verification of requirements are also considered testing.
- Reviewing the design in the design phase with the intent to improve it is also considered testing.
- Testing performed by a developer on completion of the code is also categorized as testing.
When to Stop Testing?
It is difficult to determine when to stop testing, as testing is a never-ending process and no one can claim that a software is 100% tested. The following aspects should be considered when stopping the testing process:
- Testing deadlines
- Completion of test case execution
- Completion of functional and code coverage to a certain point
- Bug rate falls below a certain level and no high-priority bugs are identified
- Management decision
Verification & Validation
These two terms are very confusing for most people, who use them interchangeably. The following table highlights the differences between verification and validation.
1. Verification addresses the concern: "Are you building it right?"
   Validation addresses the concern: "Are you building the right thing?"
2. Verification ensures that the software system meets all the functionality.
   Validation ensures that the functionalities meet the intended behavior.
3. Verification takes place first and includes the checking of documentation, code, etc.
   Validation occurs after verification and mainly involves the checking of the overall product.
4. Verification is done by developers.
   Validation is done by testers.
5. Verification has static activities, as it includes collecting reviews, walkthroughs, and inspections to verify a software.
   Validation has dynamic activities, as it includes executing the software against the requirements.
6. Verification is an objective process; no subjective decision should be needed to verify a software.
   Validation is a subjective process and involves subjective decisions on how well a software works.
The following are some of the most common myths about software testing.
Myth 1: Testing is Too Expensive
Reality: There is a saying: pay less for testing during software development, or pay more for maintenance or correction later. Early testing saves both time and cost in many aspects; however, cutting costs by skipping testing may result in the improper design of a software application, rendering the product useless.

Myth 2: Testing is Time-Consuming
Reality: During the SDLC phases, testing is never a time-consuming process. However, diagnosing and fixing the errors identified during proper testing is a time-consuming but productive activity.

Myth 3: Only Fully Developed Products are Tested
Reality: No doubt, testing depends on the source code, but reviewing requirements and developing test cases is independent of the developed code. However, an iterative or incremental approach as a development life cycle model may reduce the dependency of testing on fully developed software.

Myth 4: Complete Testing is Possible
Reality: It becomes an issue when a client or tester thinks that complete testing is possible. All paths may have been tested by the team, but complete testing is never possible. There might be some scenarios that are never executed by the test team or the client during the software development life cycle and are executed only once the project has been deployed.
Myth 5: A Tested Software is Bug-Free
Reality: This is a very common myth that clients, project managers, and the management team believe in. No one can claim with absolute certainty that a software application is 100% bug-free, even if a tester with superb testing skills has tested the application.

Myth 6: Missed Defects are due to Testers
Reality: It is not a correct approach to blame testers for bugs that remain in the application even after testing has been performed. This myth relates to time, cost, and changing-requirements constraints. However, the test strategy may also result in bugs being missed by the testing team.

Myth 7: Testers are Responsible for the Quality of the Product
Reality: It is a very common misinterpretation that only testers or the testing team should be responsible for product quality. Testers' responsibilities include identifying bugs for the stakeholders; it is then the stakeholders' decision whether to fix the bugs or release the software. Releasing the software at that time puts more pressure on the testers, as they will be blamed for any error.
Myth 8: Test Automation should be used wherever possible to Reduce Time
Reality: Yes, it is true that test automation reduces the testing time, but it is not possible to start test automation at just any time during software development. Test automation should be started when the software has been manually tested and is stable to some extent. Moreover, test automation can never be used if requirements keep changing.

Myth 9: Anyone can Test a Software Application
Reality: People outside the IT industry think, and even believe, that anyone can test a software and that testing is not a creative job. However, testers know very well that this is a myth. Thinking of alternative scenarios and trying to crash the software with the intent to explore potential bugs is not possible for the person who developed it.

Myth 10: A Tester's only Task is to Find Bugs
Reality: Finding bugs in software is the task of the testers, but at the same time, they are domain experts of the particular software. Developers are only responsible for the specific component or area assigned to them, but testers understand the overall workings of the software, what the dependencies are, and the impact of one module on another.
Fault, Error, Bugs and Failure
What is a Fault?
A software fault, also known as a defect, arises when the expected result doesn't match the actual result. It can also be an error, flaw, failure, or fault in a computer program. Most bugs arise from mistakes and errors made by developers or architects.

Fault Types
Following are the fault types associated with any application:
- Business Logic Faults
- Functional and Logical Faults
- Faulty GUI
- Performance Faults
- Security Faults
Preventing Faults
Following are the methods for preventing programmers from introducing faulty code during development:
- Programming techniques adopted
- Software development methodologies
- Peer review
- Code analysis
What is a Defect?
A software defect (bug) arises when the expected result doesn't match the actual result. It can also be an error, flaw, failure, or fault in a computer program. Most bugs arise from mistakes and errors made by developers or architects.
Following are the methods for preventing programmers from introducing bugs during development:
- Programming techniques adopted
- Software development methodologies
- Peer review
- Code analysis
Common Types of Defects
Following are the common types of defects that occur during development:
- Arithmetic Defects
- Logical Defects
- Syntax Defects
- Multithreading Defects
- Interface Defects
- Performance Defects
What is a Failure?
Under certain circumstances, the product may produce wrong results. Failure is defined as the deviation of the delivered service from compliance with the specification.
Not all defects result in failure; for example, defects in dead code do not cause failures.
Flow Diagram for Failure
Reasons for Failure
- Environmental conditions, which might cause hardware failures or a change in any of the environmental variables.
- Human error while interacting with the software, by keying in wrong inputs.
- Failures may also occur if the user tries to perform some operation with the intention of breaking the system.

Results of Failure
- Loss of Time
- Loss of Money
- Loss of Business Reputation
- Injury
- Death
What is a Bug?
In software testing, when the expected and actual behavior do not match, an incident needs to be raised. An incident may be a bug. A bug is a programmer's fault: the programmer intended to implement a certain behavior, but the code fails to conform to this behavior because of an incorrect implementation. It is also known as a defect.
Following is the workflow of Bug Life Cycle:
Life Cycle of a Bug:
Parameters of a Bug
The following details should be part of a bug report:
- Date of issue, author, approvals, and status
- Severity and priority of the incident
- The associated test case that revealed the problem
- Expected and actual results
- Identification of the test item and environment
- Description of the incident with steps to reproduce
- Status of the incident
- Conclusions, recommendations, and approvals
Test Case
What is a Test Case?
A test case is a document which has a set of test data, preconditions, expected results, and postconditions, developed for a particular test scenario in order to verify compliance with a specific requirement.
A test case acts as the starting point for test execution; after a set of input values is applied, the application has a definitive outcome and leaves the system at some end point, also known as the execution postcondition.
Test Case Parameters
Typical Test Case Parameters:
- Test Case ID
- Test Scenario
- Test Case Description
- Test Steps
- Prerequisite
- Test Data
- Expected Result
- Test Parameters
- Actual Result
- Environment Information
- Comments
Test Case Example
Example: Let us say that we need to check an input field that can accept a maximum of 10 characters.
While developing the test cases for the above scenario, the test cases are documented the following way. In the example below, the first case is a PASS scenario while the second case is a FAIL.
Test Case Example
Case 1 (PASS)
Scenario: Verify that the input field can accept a maximum of 10 characters.
Test Step: Log in to the application and key in 10 characters.
Expected Result: Application should be able to accept all 10 characters.
Actual Outcome: Application accepts all 10 characters.

Case 2 (FAIL)
Scenario: Verify that the input field does not accept more than 10 characters.
Test Step: Log in to the application and key in 11 characters.
Expected Result: Application should NOT accept all 11 characters.
Actual Outcome: Application accepts all 11 characters.
If the expected result doesn't match the actual result, we log a defect. The defect goes through the defect life cycle, and the testers re-test it after the fix.
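The pass/fail pair above can also be automated. The sketch below is illustrative only: the `acceptsInput` check is a hypothetical stand-in for the real input field's behavior, not code from any actual application.

```java
// Hypothetical check standing in for the 10-character input field above.
public class InputFieldCheck {
    static final int MAX_LEN = 10;

    // Returns true if the field would accept the given input.
    static boolean acceptsInput(String input) {
        return input != null && input.length() <= MAX_LEN;
    }

    public static void main(String[] args) {
        // Case 1: exactly 10 characters should be accepted.
        System.out.println(acceptsInput("abcdefghij"));  // true
        // Case 2: 11 characters should be rejected.
        System.out.println(acceptsInput("abcdefghijk")); // false
    }
}
```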
Test case Design Technique
Following are the typical design techniques in software engineering:

1. Deriving test cases directly from a requirement specification (black-box test design techniques):
- Boundary Value Analysis (BVA)
- Equivalence Partitioning (EP)
- Decision Table Testing
- State Transition Diagrams
- Use Case Testing

2. Deriving test cases directly from the structure of a component or system (white-box test design techniques):
- Statement Coverage
- Branch Coverage
- Path Coverage
- LCSAJ Testing

3. Deriving test cases based on the tester's experience on similar systems or the tester's intuition:
- Error Guessing
- Exploratory Testing
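As a sketch of Boundary Value Analysis from the first group: BVA picks values just below, on, and just above each boundary of a valid range. The range [1, 100] below is a made-up example, not from the slides.

```java
import java.util.List;

// Boundary Value Analysis sketch for a field accepting integers in [min, max].
public class BoundaryValues {
    // Classic BVA: min-1, min, min+1 and max-1, max, max+1.
    static List<Integer> boundaryValues(int min, int max) {
        return List.of(min - 1, min, min + 1, max - 1, max, max + 1);
    }

    static boolean isValid(int value, int min, int max) {
        return value >= min && value <= max;
    }

    public static void main(String[] args) {
        // Hypothetical field accepting 1..100: test 0, 1, 2, 99, 100, 101.
        for (int v : boundaryValues(1, 100)) {
            System.out.println(v + " -> " + (isValid(v, 1, 100) ? "valid" : "invalid"));
        }
    }
}
```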
What is a Test Suite?
A test suite is a container that has a set of tests, which helps testers in executing tests and reporting the test execution status. It can take any of three states: Active, In Progress, and Completed.
A test case can be added to multiple test suites and test plans. After creating a test plan, test suites are created, which in turn can have any number of tests.
Test suites are created based on the cycle or based on the scope. A suite can contain any type of tests, functional or non-functional.
Test Suite - Diagram
SOFTWARE TESTING METHODS
Black-Box Testing
The technique of testing without having any knowledge of the interior workings of the application is called black-box testing. The tester is oblivious to the system architecture and does not have access to the source code. Typically, while performing a black-box test, a tester will interact with the system's user interface by providing inputs and examining outputs without knowing how and where the inputs are worked upon.
The following table lists the advantages and disadvantages of black-box testing.
Black-Box Testing
Advantages:
- Well suited and efficient for large code segments.
- Code access is not required.
- Clearly separates the user's perspective from the developer's perspective through visibly defined roles.
- Large numbers of moderately skilled testers can test the application with no knowledge of implementation, programming language, or operating systems.

Disadvantages:
- Limited coverage, since only a selected number of test scenarios is actually performed.
- Inefficient testing, due to the fact that the tester only has limited knowledge about the application.
- Blind coverage, since the tester cannot target specific code segments or error-prone areas.
- The test cases are difficult to design.
White-Box Testing
White-box testing is the detailed investigation of internal logic and structure of the code. White-box testing is also called glass testing or open-box testing. In order to perform white-box testing on an application, a tester needs to know the internal workings of the code.
The tester needs to have a look inside the source code and find out which unit/chunk of the code is behaving inappropriately.
The following lists the advantages and disadvantages of white-box testing.

Advantages:
- As the tester has knowledge of the source code, it becomes very easy to find out which type of data can help in testing the application effectively.
- It helps in optimizing the code.
- Extra lines of code can be removed, which can bring in hidden defects.
- Due to the tester's knowledge about the code, maximum coverage is attained during test scenario writing.

Disadvantages:
- Because a skilled tester is needed to perform white-box testing, the costs are increased.
- Sometimes it is impossible to look into every nook and corner to find hidden errors that may create problems, as many paths will go untested.
- It is difficult to maintain white-box testing, as it requires specialized tools like code analyzers and debugging tools.
Grey-Box Testing
Grey-box testing is a technique to test the application with limited knowledge of its internal workings. In software testing, the phrase "the more you know, the better" carries a lot of weight while testing an application.
Mastering the domain of a system always gives the tester an edge over someone with limited domain knowledge. Unlike black-box testing, where the tester only tests the application's user interface, in grey-box testing the tester has access to design documents and the database. With this knowledge, a tester can prepare better test data and test scenarios while making a test plan.
The following table lists the advantages and disadvantages of grey-box testing.
Advantages:
- Offers the combined benefits of black-box and white-box testing wherever possible.
- Grey-box testers don't rely on the source code; instead they rely on interface definitions and functional specifications.
- Based on the limited information available, a grey-box tester can design excellent test scenarios, especially around communication protocols and data type handling.
- The test is done from the point of view of the user and not the designer.

Disadvantages:
- Since access to the source code is not available, the ability to go over the code and test coverage is limited.
- The tests can be redundant if the software designer has already run a test case.
- Testing every possible input stream is unrealistic because it would take an unreasonable amount of time; therefore, many program paths will go untested.
A Comparison of Testing Methods
1. Black-box: The internal workings of the application need not be known.
   Grey-box: The tester has limited knowledge of the internal workings of the application.
   White-box: The tester has full knowledge of the internal workings of the application.
2. Black-box: Also known as closed-box testing, data-driven testing, or functional testing.
   Grey-box: Also known as translucent testing, as the tester has limited knowledge of the insides of the application.
   White-box: Also known as clear-box testing, structural testing, or code-based testing.
3. Black-box: Performed by end-users and also by testers and developers.
   Grey-box: Performed by end-users and also by testers and developers.
   White-box: Normally done by testers and developers.
4. Black-box: Testing is based on external expectations; the internal behavior of the application is unknown.
   Grey-box: Testing is done on the basis of high-level database diagrams and data flow diagrams.
   White-box: Internal workings are fully known and the tester can design test data accordingly.
5. Black-box: The least exhaustive and time-consuming.
   Grey-box: Partly exhaustive and time-consuming.
   White-box: The most exhaustive and time-consuming type of testing.
6. Black-box: Not suited for algorithm testing.
   Grey-box: Not suited for algorithm testing.
   White-box: Suited for algorithm testing.
7. Black-box: This can only be done by the trial-and-error method.
   Grey-box: Data domains and internal boundaries can be tested, if known.
   White-box: Data domains and internal boundaries can be better tested.
White-Box Testing Techniques:
- Statement Coverage - This technique is aimed at exercising all programming statements with minimal tests.
- Branch Coverage - This technique involves running a series of tests to ensure that all branches are tested at least once.
- Path Coverage - This technique corresponds to testing all possible paths, which means that each statement and branch is covered.
Calculating Structural Testing Effectiveness:
Statement Coverage = (Number of statements exercised / Total number of statements) x 100%
Branch Coverage = (Number of decision outcomes tested / Total number of decision outcomes) x 100%
Path Coverage = (Number of paths exercised / Total number of paths in the program) x 100%
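The formulas above are straightforward ratios. The helper below is a minimal sketch; the counts passed in `main` are invented for illustration, as in practice they come from a coverage tool.

```java
// Minimal helper for the structural coverage formulas above.
public class Coverage {
    // Generic coverage percentage: (exercised / total) x 100%.
    static double percent(int exercised, int total) {
        return 100.0 * exercised / total;
    }

    public static void main(String[] args) {
        // Invented example counts: 45 of 50 statements, 12 of 16 branch outcomes.
        System.out.println("Statement coverage: " + percent(45, 50) + " %");
        System.out.println("Branch coverage:    " + percent(12, 16) + " %");
    }
}
```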
Black-Box Testing Techniques:
- Equivalence Class Partitioning
- Boundary Value Analysis
- Domain Tests
- Orthogonal Arrays
- Decision Tables
- State Models
- Exploratory Testing
- All-pairs Testing
V Model - SDLC
The V model, a software development life cycle methodology, describes the activities to be performed and the results that have to be produced during the life cycle of the product.
It is known as the verification and validation model.
Validation answers the question: "Are we developing the product that does all that the user needs from this software?"
Verification answers the question: "Are we developing this product by firmly following all design specifications?"
V-Model Objectives
- Project risk minimization
- Guaranteed quality
- Total cost reduction over the entire project
- Improved communication between all parties involved
V-Model Different Phases
V-Model Phases (continued)
- The Requirements phase produces a document describing what the software is required to do, after the requirements are gathered and analyzed; the corresponding test activity is User Acceptance Testing.
- In the Architectural Design phase, a software architecture is designed by identifying the components within the software and establishing the relationships between them; the corresponding test activity is System Testing.
- The High-Level Design phase breaks the system into subsystems with identified interfaces, which then get translated to a more detailed design; the corresponding test activity is Integration Testing.
- In the Detailed Design phase, the detailed implementation of each component is specified; the detailed design is broken into data structures and the algorithms used; the corresponding test activity is Unit Testing.
- In Coding, each component of the software is coded and tested to verify that it faithfully implements the detailed design.
Advantages and Limitations of V-Model
Advantages:
- Emphasizes verification and validation of the product in the early stages of product development.
- Each stage is testable.
- Project management can track progress by milestones.
- Easy to understand, implement, and use.

Limitations:
- Does not easily handle concurrent events.
- Does not handle iterations of phases.
- Does not easily handle dynamic changes in requirements.
- Does not contain risk analysis or mitigation activities.
Software Testing - Validation Testing
Validation testing is the process of evaluating software during the development process, or at the end of it, to determine whether it satisfies the specified business requirements.
Validation testing ensures that the product actually meets the client's needs. It can also be defined as demonstrating that the product fulfills its intended use when deployed in an appropriate environment.
It answers the question: Are we building the right product?
Validation Testing - Workflow
Validation testing can best be demonstrated using the V-Model. The software/product under test is evaluated during this type of testing.
Activities:
- Unit Testing
- Integration Testing
- System Testing
- User Acceptance Testing
What is Verification Testing ?
Verification is the process of evaluating the work products of a development phase to determine whether they meet the specified requirements.
Verification ensures that the product is built according to the requirements and design specifications. It also answers the question: Are we building the product right?
Verification Testing - Workflow
Verification testing can best be demonstrated using the V-Model. The artefacts such as test plans, requirement specifications, design, code, and test cases are evaluated.
Activities:
- Reviews
- Walkthroughs
- Inspections
Test-Driven Development (TDD)
Test-driven development starts with developing a test for each feature. The test may fail at first, as the tests are written even before the development. The development team then develops and refactors the code to pass the test.
Test-driven development is related to test-first programming, which evolved as part of extreme programming concepts.
Test-Driven Development Process
1. Add a test
2. Run all tests and see if the new one fails
3. Write some code
4. Run tests and refactor code
5. Repeat
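The cycle above can be sketched in miniature. The `add()` example below is invented for illustration: the test method is written first and would fail until the minimal implementation beneath it is added.

```java
// TDD sketch: the test is written before the code it exercises.
public class TddExample {
    // Step 1: the test, written before add() exists; it fails until
    // the implementation below is written.
    static boolean testAdd() {
        return add(2, 3) == 5;
    }

    // Step 3: minimal code written only to make testAdd() pass.
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // Steps 2 and 4: run the test and check the result.
        System.out.println(testAdd() ? "test passed" : "test failed");
    }
}
```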
Context of Testing
- Valid inputs
- Invalid inputs
- Errors, exceptions, and events
- Boundary conditions
- Everything that might break
Benefits of TDD
- Much less debug time
- Code proven to meet requirements
- Tests become a safety net
- Near-zero defects
- Shorter development cycles
What is Test Driver?
Test Drivers are used during Bottom-up integration testing in order to simulate the behavior of the upper level modules that are not yet integrated.
Test Drivers are the modules that act as temporary replacement for a calling module and give the same output as that of the actual product.
Drivers are also used when the software needs to interact with an external system, and they are usually more complex than stubs.
Driver - Flow Diagram
Driver - Flow Diagram (continued)
The above diagrams show that the upper-level modules are still under development and cannot be integrated at this point of time; hence, drivers are used to test the lower-level modules 4, 5, 6 and 7. The order of integration will be: 4,2 / 5,2 / 6,3 / 7,3 / 2,1 / 3,1.
Testing Approach:
- First, test the integration between modules 4, 5, 6 and 7.
- Test the integration between modules 4 and 5 with Driver 2.
- Test the integration between modules 6 and 7 with Driver 3.
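In code, a driver is simply a temporary harness that plays the role of the not-yet-built calling module. The sketch below is hypothetical: `module4Discount` stands in for a completed lower-level module, and `driverForModule2` stands in for its missing caller; neither name comes from the diagrams.

```java
// Bottom-up integration sketch: a driver replaces the missing upper module.
public class DriverDemo {
    // Completed lower-level module under test (stand-in for "Module 4"):
    // returns a 10% discount for amounts over 100, otherwise 0.
    static int module4Discount(int amount) {
        return amount > 100 ? amount / 10 : 0;
    }

    // Driver: temporary replacement for the calling module ("Module 2"),
    // feeding representative inputs and checking the outputs.
    static boolean driverForModule2() {
        return module4Discount(200) == 20 && module4Discount(50) == 0;
    }

    public static void main(String[] args) {
        System.out.println(driverForModule2() ? "module 4 OK" : "module 4 FAILED");
    }
}
```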
What is Test Environment?
Test Environment consists of elements that support test execution with software, hardware and network configured. Test environment configuration must mimic the production environment in order to uncover any environment/configuration related issues.
Factors for designing a Test Environment:
- Determine if the test environment needs archiving in order to take backups.
- Verify the network configuration.
- Identify the required server operating system, databases and other components.
- Identify the number of licenses required by the test team.
Environmental Configuration
It is the combination of hardware and software environment on which the tests will be executed. It includes hardware configuration, operating system settings, software configuration, test terminals and other support to perform the test.
Example: A typical environmental configuration for a web-based application is given below:
- Web Server: IIS/Apache
- Database: MS SQL
- OS: Windows/Linux
- Browser: IE/Firefox
- Java version: 6
What is Test Execution?
Test execution is the process of executing the code and comparing the expected and actual results. The following factors are to be considered for a test execution process:
- Based on risk, select a subset of the test suite to be executed for this cycle.
- Assign the test cases in each test suite to testers for execution.
- Execute tests, report bugs, and capture test status continuously.
- Resolve blocking issues as they arise.
- Report status, adjust assignments, and reconsider plans and priorities daily.
- Report test cycle findings and status.
Examples
Identifying test cases
(*use template from previous slide*)
At the design stage, test cases can be derived:
- From use-cases
- From sequence diagrams (alternative flows)
- From activity diagrams
Use cases tell the customer what to expect, the developer what to code, the technical writer what to document, and the tester what to test.
Although few actually do it, developers can begin creating test cases as soon as use cases are available, well before any code is written.
Textual Description for the University Course Registration Use-Case: Basic Flow of Events

Basic Flow: Register for Courses
1. Logon. This use case starts when a Student accesses the Wylie University Web site. The system asks for, and the Student enters, the student ID and password.
2. Select 'Create a Schedule'. The system displays the functions available to the student. The student selects "Create a Schedule."
3. Obtain Course Information. The system retrieves a list of available course offerings from the Course Catalog System and displays the list to the Student.
4. Select Courses. The Student selects four primary course offerings and two alternate course offerings from the list of available course offerings.
5. Submit Schedule. The student indicates that the schedule is complete. For each selected course offering on the schedule, the system verifies that the Student has the necessary prerequisites.
6. Display Completed Schedule. The system displays the schedule containing the selected course offerings for the Student and the confirmation number for the schedule.
Textual Description for University Course Registration Use-Case Alternate Flows
Alternate Flows: Register for Courses
1. Unidentified Student. In Step 1 of the Basic Flow, Logon, if the system determines that the student ID and/or password is not valid, an error message is displayed.
2. Quit. The Course Registration System allows the student to quit at any time during the use case. The Student may choose to save a partial schedule before quitting. All courses that are not marked as "enrolled in" are marked as "selected" in the schedule. The schedule is saved in the system. The use case ends.
3. Unfulfilled Prerequisites, Course Full, or Schedule Conflicts. In Step 5 of the Basic Flow, Submit Schedule, if the system determines that prerequisites for a selected course are not satisfied, that the course is full, or that there are schedule conflicts, the system will not enroll the student in the course. A message is displayed that the student can select a different course. The use case continues at Step 4, Select Courses, in the basic flow.
4. Course Catalog System Unavailable. In Step 3 of the Basic Flow, Obtain Course Information, if the system is down, a message is displayed and the use case ends.
5. Course Registration Closed. If, when the use case starts, it is determined that registration has been closed, a message is displayed, and the use case ends.
A three-step process for generating test cases from a fully-detailed use case:
1. For each use case, generate a full set of use-case scenarios.
2. For each scenario, identify at least one test case and the conditions that will make it "execute."
3. For each test case, identify the data values with which to test
E.g., a username should be at least 8 chars, should not contain spaces, should contain no special chars, and should contain at least 1 digit.
Test case: username validation

Steps | Expected result
1. Enter un less than 8 chars (value=abcd) | Display msg "Username should be at least 8 chars"
2. Enter un of exactly 8 chars (value=abcdefgh) | System accepts username
3. Enter un greater than 8 chars (value=abcdefghi) | System accepts username
4. Enter un greater than 8 chars (value=abcdefghi) | System accepts username
5. Enter un having space chars (value=abcd efghi) | Display msg "Username should not contain space chars"
6. Enter un having special chars (value=abcd&fghi) | Display msg "Username should not contain special chars"
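One possible implementation of these username rules is sketched below. The method name and message strings are illustrative; note that the "at least 1 digit" rule from the requirement is included here even though the table above does not exercise it.

```java
// Sketch of the username rules: >= 8 chars, no spaces, no special chars,
// at least 1 digit. Returns "OK" or an error message.
public class UsernameValidator {
    static String validate(String un) {
        if (un.length() < 8) return "Username should be at least 8 chars";
        if (un.contains(" ")) return "Username should not contain space chars";
        if (!un.matches("[A-Za-z0-9]+")) return "Username should not contain special chars";
        if (!un.matches(".*\\d.*")) return "Username should contain at least 1 digit";
        return "OK";
    }

    public static void main(String[] args) {
        System.out.println(validate("abcd"));       // too short
        System.out.println(validate("abcd efghi")); // contains a space
        System.out.println(validate("abcd&fghi"));  // special character
        System.out.println(validate("abcdefgh1"));  // OK
    }
}
```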
Write test cases for the 'login window':
- Test case 1: valid_un_valid_pw
- Test case 2: invalid_un_valid_pw
- Test case 3: valid_un_invalid_pw
Write steps & expected results for all test cases.
Test case 1: valid_un_valid_pw
Precondition: Log In page of the system is open & focus is on the User Name text field
Postcondition: First.html page opens

Step no | Steps | Expected result
1 | Enter valid user name in username text field & press tab | Focus moves to password text field
2 | Enter valid password in password text field | Password displayed in asterisks or dots (*** / ...)
3 | Click 'sign in' button or press enter | First.html page opens
Test case 2: invalid_un_valid_pw
Precondition: Log In page of the system is open & focus is on the User Name text field
Postcondition: Error message is displayed & Log In page displayed

Step no | Steps | Expected result
1 | Enter invalid user name in username text field & press tab | Focus moves to password text field
2 | Enter valid password in password text field | Password displayed in asterisks or dots (*** / ...)
3 | Click 'sign in' button or press enter | Display error message "Un or PW is invalid!"
Exp No. 8
- JUnit is for unit testing (black box/white box).
- Write MyTest.java for your code (mini project).
- Pass expected & actual results to the assertEquals() method.
- Run MyTest.java as a JUnit test: green indicates the test passed; red indicates the test failed.
- Write a separate test file for every test case & create a test suite for these test cases.
- Refer to https://www.youtube.com/watch?v=tkzJsP7NP54
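Since JUnit itself may not be on the classpath here, the sketch below mimics the assertEquals() pattern with a tiny stand-in helper; in a real MyTest.java you would use org.junit's assertEquals instead. The square() method is a made-up placeholder for your mini-project code.

```java
// JUnit-style sketch with a stand-in assertEquals (not the real JUnit API).
public class MyTestSketch {
    // Placeholder for the code under test from your mini project.
    static int square(int n) { return n * n; }

    // Stand-in for JUnit's assertEquals: compares expected and actual results.
    static void assertEquals(int expected, int actual) {
        if (expected != actual)
            throw new AssertionError("expected " + expected + " but was " + actual);
    }

    public static void main(String[] args) {
        assertEquals(25, square(5)); // passes ("green")
        assertEquals(9, square(3));  // passes
        System.out.println("all tests passed");
    }
}
```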