Testing Tools for Beginners

Testing Tools

Software Quality:

Technical: meeting customer requirements; meeting customer expectations (user-friendliness, performance, privacy)

Non-Technical:

Cost of Product

Time to Market

Software Quality Assurance:

To monitor and measure the strength of development process, Organisation follows SQA concepts.

Software Project:

Software related problems solved by software engineers through a software engineering process.

Life Cycle Development (LCD):

Information Gathering

Analysis

Design

Coding

Testing

Maintenance

Testing:

Verification & validation of software is called testing.

Fish Model of Software Development:

LCD (development): Information Gathering (BRS) → Analysis (S/W RS = FRS + SRS) → Design (HLD, LLD's) → Coding → System Testing → Maintenance

LCT (testing): Reviews → Reviews / Prototype → White Box Testing → Black Box Testing → Test S/W Changes

Verification: reviews, white box testing. Validation: black box / system testing.

Business Requirement Specification (BRS-Information Gathering):

BRS defines requirements of the customer to be developed as a software. This type of documents developed by business analyst category people.

Software Requirement Specification (S/W RS):

This document defines w.r.t BRS. This document consists of functional Requirements to develop (FRS) & System Requirements to use (SRS). This document also developed by business analyst people only.

Reviews:

It is a static testing technique to estimate completeness and correctness of a document.

Design

High Level Design Document (HLD):

This document is also known as external design. This document defines hierarchy of all possible functionality’s as modules.

Low Level Design Documents (LLD’s):

This document is also known as internal design. This document defines structural logic of every sub module.

Example: DFD (Data Flow Diagram), E-R Diagram, Class Diagram, Object Diagram.

Prototype:

A sample model of an application without functionality is called a prototype. Ex: a PowerPoint slide show.

Coding:

White Box Testing:

It is a coding level testing technique. During this test, test engineers verifies completeness and correctness of every program.This testing is also known as Glass Box Testing or Clear Box Testing.

System Testing:

Black Box Testing:

It is a build-level testing technique. During this test, the testing team validates the internal functionality depending on the external interface.

V – Model of S/W Development:

V – Stands for Verification & Validation. This model defines mapping between development stages & Testing Stages.

Development stage                  Testing stage
Development plan                   Assessment of development plan
I/f gathering & Analysis           Prepare test plan; requirements phase testing
Design & Coding                    Design phase testing; program phase testing (WB)
Install build                      Functional & system testing (BB); user acceptance testing; test documentation
Maintenance                        Port testing; test s/w changes; test efficiency (DRE = A / (A+B))

Defect Removal Efficiency (DRE):

It is also known as Defect Deficiency.

DRE = A / (A + B)

where
A = number of defects found by the testing team during the testing process
B = number of defects found by the customer during maintenance
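For example, if the testing team finds 90 defects during testing and the customer reports 10 more during maintenance, DRE = 90 / (90 + 10) = 0.9, i.e. 90% of the defects were removed before release.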

Refinement Form of V – Model:

For small-scale and medium-scale organisations the V-Model is expensive to follow. These organisations apply some refinements to the V-Model to develop quality software:

BRS / URS / CRS  –  User Acceptance Testing
S/W RS           –  Functional & System Testing (BB Testing)
HLD              –  Integration Testing
LLD's            –  Unit Testing (White Box Testing)
Coding
(Reviews are conducted on the analysis and design documents.)

From the above refinement form of the V-Model, small and medium scale organisations maintain a separate testing team only for the functional & system testing stage, to decrease the cost of testing.

I) Reviews During Analysis: In general, software development process starts with information gathering and analysis. In this stage business analyst category people are developing BRS and S/W RS like documents. BRS defines requirements of the customer & S/W RS defines functional requirements to be developed and system requirements to be used.

After completion of this type of documents preparation, they are conducting reviews on the documents for Completeness & Correctness.In this review analysts are using below factors:

Are they complete?
Do they meet the requirements?
Are they achievable? (w.r.t. technology)
Are they reasonable? (w.r.t. time & cost)
Are they testable?


II) Reviews During Design:

After completion of analysis and its reviews, the project-level design starts: the logical design of the application in terms of external & internal design (HLD, LLD's).

In this stage, reviews are conducted for completeness and correctness of the design documents, using the below factors:

Are they understandable?
Do they meet the right requirements?
Are they complete?
Are they followable?
Do they handle errors?

III) UNIT TESTING:

After completion of design & its reviews, programmers start coding to physically convert the design into software. During this coding stage, programmers conduct unit testing through a set of White Box Testing techniques.

This unit testing is also known as Module Testing or Component testing or Program Testing or Micro Testing.

There are three possible White Box Techniques.

1. Execution Testing:

Basis path coverage (execution of all possible blocks in a program)
Loop coverage (termination of loop statements)
Program technique coverage (fewer memory cycles & CPU cycles)

2. Operations Testing:

Run on customer expected platforms (OS, Browser, Compiler etc.).

3. Mutation Testing:

Mutation means a change in the program. White box testers make such a change in the program to estimate the test coverage of the program.

Tests pass, retests on the changed program fail → complete testing.
Tests pass, retests on the changed program also pass → incomplete testing.
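For example, a tester might flip a single condition in the program (say, change a ">" to ">="); if the existing tests still pass against the changed program, the coverage of that condition is incomplete and more test data is needed.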

IV) Integration Testing:

After completion of dependent modules development and testing, programmers are combined them to form a system. In this Integration, they are conducting Integration testing on the coupled modules w.r.t. HLD.

There are three approaches to conduct integration testing.

(Models: Top-Down – the Main module is tested with a Stub in place of an under-construction sub module; Bottom-Up – Sub 1, Sub 2, Sub 3 are tested with a Driver in place of the main module; Sandwich – a combination of both.)

1. Top – Down Approach:

Conduct testing on main module with out coming to some of the sub modules is called Top-Down Approach.

From the above model, stub is a temporary program instead of under construction sub module. It is also known as called program.

2. Bottom – Up Approach:

Conduct testing on sub modules with out coming from main module is called Bottom – Up Approach.From the above model, Driver is a temporary program instead of main module. This program is also known as calling program.

3. Sandwich Approach:

The combination of Top – Down and Bottom-UP approaches is called Sandwich Approach.

BUILD:

The final integrated set of all modules in .EXE form is called a Build.

V) Functional & System Testing:

After completion of the final integration of modules as a system, test engineers plan to conduct Functional & System Testing through Black Box Testing techniques.

These techniques are classified into four categories:

1. Usability Testing (core level)
2. Functionality Testing (core level)
3. Performance Testing (advanced level)
4. Security Testing (advanced level)

During Usability Testing, the testing team validates the user-friendliness of screens. During Functionality Testing, the testing team validates the correctness of customer requirements. During Performance Testing, the testing team estimates the speed of processing. During Security Testing, the testing team validates the privacy of user operations.

1. Usability Testing:

In general testing team starts test execution with Usability testing. During this test, testing team validates User Friendliness of screens of build.

During Usability Testing, testing teams are applying two types of sub tests.

a) User Interface Test (UI):

Ease of use (understandable screens)
Look & feel (attractive or pleasant)
Speed of interface (fewer events to complete a task)

b) Manuals Support testing:

Context sensitiveness of user manuals.

(Flow: receive build from developers → usability testing (UI testing, manuals support test) → remaining system tests.)

2) Functional Testing:

A major part of BB testing is Functional Testing. During this test, the testing team concentrates on meeting customer requirements. Functional testing is classified into the below tests.

a) Functionality or Requirements Testing:

During this test, the test engineer validates the correctness of every functionality in terms of the below coverages:

Behavioral coverage (changes in object properties)
Input (i/p) domain coverage (size and type of every input object)
Error-handling coverage (preventing negative navigations)
Calculations coverage (correctness of outputs)


Backend coverage (impact of front-end operations on backend table contents)
Service level coverage (order of functionalities)

b) Input Domain Testing:

It is a part of functionality testing. Test engineers maintain special structures to define the size and type of every input object.

Boundary Value Analysis, BVA (range / size):

Min      – pass
Min - 1  – fail
Min + 1  – pass
Max      – pass
Max - 1  – pass
Max + 1  – fail

Equivalence Class Partitions, ECP (type):

Valid values   – pass
Invalid values – fail

Example 1: A login process takes a User ID and Password to validate users. User ID allows alphanumerics in lower case, 4 to 16 characters long. Password allows alphabets in lower case, 4 to 8 characters long. Prepare BVA and ECP for User ID and Password.

User ID
BVA: 4 – pass, 3 – fail, 5 – pass, 16 – pass, 15 – pass, 17 – fail
ECP: valid – a to z, 0 to 9; invalid – A to Z, special characters, blank

Password
BVA: 4 – pass, 3 – fail, 5 – pass, 8 – pass, 7 – pass, 9 – fail
ECP: valid – a to z; invalid – A to Z, 0 to 9, special characters, blank

Example 2: Prepare BVA & ECP for the following text box. A text box allows 12-digit numbers along with * as mandatory, and sometimes it allows – also.

ECP: valid – 0 to 9 with *; 0 to 9 with *, –
     invalid – A to Z, a to z, 0 to 9 alone, special characters other than * and –, blank

BVA: Min = Max = 12 – pass
     11 – fail
     13 – fail

c) Recovery Testing:

It is also known as reliability testing. During this test, test engineers validate whether the application changes from an abnormal state back to a normal state (for example, using backup & recovery).

d) Compatibility Testing:

It is also known as portability testing. During this test, the testing team validates whether our application build runs on the customer-expected platforms (OS, compiler, browser and other system software) or not.

Forward compatibility and backward compatibility are both checked.

Note: During testing, test engineers find backward compatibility defects most often.

e) Configuration Testing:

It is also known as hardware compatibility testing. During this test, the testing team validates whether our application build supports different technology hardware devices or not.

EX: Different types of LANs, different topologies, different technology printer’s etc.

f) Inter System Testing:

It is also known as end-to-end testing. During this test, the testing team validates whether our application build co-exists with other existing software (for example, to share resources) or not.


g) Installation Testing:

During this test, the testing team validates the installation of our application build, along with supporting software, into customer-site-like configured systems. During this test, the testing team observes the below factors:

Setup program execution to start installation
Ease of interface during installation
Disk space occupied after installation

h) Parallel Testing:

It is also known as comparative testing and applicable to software products only. During this test, testing team compare our application build with competitors products in the market.

i) Sanitation Testing:

It is also known as garbage testing. During this test, testing team try to find extra features in our application build w.r.t customer requirements.

Defect:

During testing, the testing team reports defects to developers in terms of the below categories:

Mismatch between expected and actual behaviour
Missing functionality
Extra functionality w.r.t. CRS

When defects are accepted by the development team to be solved, they are called bugs. Sometimes defects are also known as issues. Defects arise in the application due to errors in coding.

3) Performance Testing:

It is an advanced testing technique and expensive to apply because testing team have to create huge environment to conduct this testing. During this test, testing team validates Speed of Processing. During this performance testing, testing team conduct below sub tests.

a) Load Testing:

The execution of our application under customer expected configuration and customer expected load to estimate performance is called Load Testing.


b) Stress Testing:

Execution of our application under customer expected configuration and uninterval load’s to estimate performance is called stress testing.

c) Storage Testing:

The execution of application under huge amounts of resources to estimate storage limitations is called Storage Testing.


EX: MS-Access 2 GB database as maximum.

d) Data Volume Testing:

The execution of our application under customer expected configuration to estimate peak limits of data is called data volume testing.

4) Security Testing:

It is also an advanced testing technique and is complex to conduct. During security testing, the testing team validates the privacy of user operations. During this test, the testing team applies the below sub tests:

a) Authorization (whether the user is authorised or not)
b) Access Control (whether a valid user has permission for a specific service or not)
c) Encryption / Decryption (data conversion between the client process and the server process)

Note: In small and medium scale organisations, test engineers cover authorization and access control during functional testing. The encryption and decryption process is covered by development people.

VI) User Acceptance Testing (UAT):


After completion of Functional & System testing, the organization invites customer-site people to collect feedback. There are two methods to conduct UAT: the alpha test and the beta test.

Alpha test: conducted on software applications, by real customers, in the development site.
Beta test: conducted on software products, by customer-site-like people, in customer-site-like environments.

Feedback is then collected.

VII) Testing During Maintenance:

After completion of the User Acceptance Test and its modifications, management concentrates on release team formation. This team consists of a few developers and a few testing & hardware engineers.

This team conducts Port Testing at the customer site. During this test, the release team validates the below factors:

Compact installation
Overall functionality
Input device handling
Output device handling
OS error handling
Secondary storage handling
Co-existence with other software

After completion of port testing, release team provides training sessions to customer site people and comes back.

During software maintenance customer site people are sending Change request (CR) to the organization.

A change request is handled through the CCB (Change Control Board):

Enhancement: impact analysis → perform change → test software change
Missed defect: impact analysis → perform change → change the test process

Testing Terminology:

1. Monkey Testing / Chimpanzee Testing:

A tester conducts any test on basic functionality’s of application.

2. Exploratory Testing:

Level by level of functionality’s coverage is called exploratory testing.

3. Sanity Testing:

It is also known as Tester Acceptance Testing (TAT) or Build Verification Test (BVT).

After receiving build from development team, testing team estimates stability of that build before starts testing.

4. Smoke Testing:

It is an extra shakeup in sanity process. In this test, tester try to trouble shoots when that build is not working before start testing.

5. Big Bang Testing:(Informal Testing - Single Stage)

A testing team conducts single stage testing, after completion of entire system development instead of multiple stages.

6. Incremental Testing:

A multiple stages of testing process from unit level to system level is called incremental testing. It is also known as formal testing.

7. Manual Vs Automation:

A tester conducts any test on application build without using any Testing tool / Software is called manual testing.

A tester conducts a test on application build with the help of Testing tool / Software is called Automation testing.

In common testing process, test engineers are using test Automation w.r.t test Impact and Criticality. Impact means that test repetition & Criticality means that complex to apply test manually. Due to these two reasons, testing people are using test Automation.

8. Re-Testing :

The re-execution of a test with multiple test data to validate a function is called Re-Testing.Ex: To validate multiplication, test engineers use different combination of inputs in terms of Minimum, Maximum, Integer, Float, +ve and –ve ect.

9. Regression Testing:

The re-execution of tests on a modified build, to ensure that the bug fix works and to check for side effects, is called Regression Testing (previously failed tests and related previously passed tests are re-executed).

Note:

1) Re-Testing on same build & regression testing on modified build but both are indicating re-execution.

2) From the definitions of Re-Testing and Regression Testing, test repetition is mandatory in test engineer job. Due to this reason test engineers are concentrating on test Automation.

10. Error, Defect and Bug:

A mistake in code is called an error. Due to errors in coding, test engineers get mismatches in the application, called defects. If a defect is accepted by development to be solved, it is called a bug.


WINRUNNER – 7.0

Developed by Mercury Interactive
Functionality testing tool
Supports client/server and web applications (VB, VC++, Java, Power Builder, D2K, Delphi, HTML and Siebel)
To support .NET, SAP, PeopleSoft, Oracle applications and multimedia, we can use QuickTest Professional (QTP)

TEST PROCESS:

1. Learning:

Recognition of objects and windows in the application by WinRunner is called learning. WinRunner 7.0 supports auto learning.

2. Record Script:

Test engineer creates automated test script to record our business operations. WinRunner record manual test operations in TSL (Test Script Language) like as “C”.

3. Edit Script:

Test engineers are inserting required check points into the record script.

4. Run Script:

During test execution, test engineers run the script instead of manual testing.

5. Analyze Results:

During automation script execution on application build, WinRunner returns results in terms of passed & failed. Depends on that results, test engineers are concentrating on defect tracking.


Note: WinRunner runs only on Windows-family operating systems. To conduct functionality testing on an application build on Unix / Linux platforms, we can use XRunner.

CASE STUDY: Login (UID, PWD, OK)

Expected: the OK button is enabled only after filling UID & PWD.

Manual Process:

Focus to login → OK disabled
Enter UID → OK disabled
Enter PWD → OK enabled

Automation Process:

set_window(“login”, 5);

button_check_info(“OK”, “enabled”, 0);

edit_set(“UID”, “xxxx”);

button_check_info(“OK”, “enabled”, 0);

password_edit_set(“PWD”, “encrypted PWD”);

button_check_info(“OK”, “enabled”, 1);

button_press(“OK”);

Test Script :

An automated manual test program is called a test script. This program consists of two types of statements: navigational statements to operate the project, and checkpoints to conduct testing.

Add-In Manager (window):

It lists out all possible supported technologies by WinRunner to conduct testing.


WinRunner Icons:

1. Start Recording

2. Run from top

3. Run from point.

4. Pause

Recording Modes:

WinRunner records manual operations in two types of modes such as Context Sensitive Mode and Analog Mode.

a) Context Sensitive Mode:

In this mode WinRunner records mouse and keyboard operations w.r.t objects and windows in application build. It is a default mode in WinRunner.

Focus to window set_window(“window name”, time to focus);

Click push button button_press(“button name”);

Fill edit box edit_set(“text box”, “typed text”);

Fill password password_edit_set(“password”, “encrypted password”);

Select item in list list_select_item(“list box name”, “item”);

Selection option in menu menu_select_item(“menu name; option name”);

Radio button button_set(“radio button name”, ON/OFF);

Check box button_set(“check box name”, ON/OFF);

Note: TSL is a case sensitive language and it allows entire scripting in lower case but maintains Flags in upper case.
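As an illustration only (the order number "1" and the flow itself are taken from the later Flight Reservation examples; this is a sketch, not a definitive recording), a short flow recorded in context sensitive mode combines the statements above:

# hypothetical recorded flow in context sensitive mode
set_window("Flight Reservation", 5);      # focus to the window
menu_select_item("File;Open Order...");   # select a menu option
set_window("Open Order", 1);
button_set("Order No.", ON);              # switch a radio button ON
edit_set("Edit", "1");                    # fill an edit box
button_press("OK");                       # click a push button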

b) Analog Mode:

To record mouse pointer movements w.r.t desktop coordinates. We can use this mode in WinRunner.

EX: Digital Signature, Graphs drawing and image movements.

To select Analog mode recording in WinRunner, either of the below possibilities can be used:

Click Start Recording twice
Create menu → Record – Analog

Note:

1. In Analog mode, WinRunner records mouse pointer movements on the desktop w.r.t. desktop coordinates. For this reason the test engineer keeps the corresponding window in its default position during recording and running.

2. The test engineer also keeps the monitor resolution constant during recording and running.

3. WinRunner provides F2 as a shortcut key to switch from one mode to the other.

Analog Recording:

In Analog mode WinRunner records the below TSL statements.

1. move_locator_track( ); :

WinRunner use this function to record mouse pointer movements on the desktop in one unit (one sec) of time. Syntax:

move_locator_track(Track No);

By default track no starts with 1.

2. mtype( ); :

WinRunner use this function to record mouse operation on the desktop.

Syntax: mtype("<T track no><kleft/kright>+/-");

T track no – the track number where the mouse operation occurred on the desktop
+ / -      – release / hold the key

3. type( ); :

WinRunner use this function to record keyboard operations in Analog mode.

Syntax:type(“Typed text” / “ASCII notation”);

CHECK POINTS:

After completion of the required navigation recording, test engineers insert checkpoints into the script to cover the below sub tests.

1. Behavioral coverage
2. Input domain coverage
3. Error handling coverage
4. Calculation coverage
5. Backend coverage
6. Service level coverage

To automate above sub tests, we can use four types of checkpoints in WinRunner.

1. GUI Check Point
2. Bitmap Check Point
3. Database Check Point
4. Text Check Point

a) GUI Check Point:

To test properties of objects, we can use this checkpoint in WinRunner. This checkpoint consists of three sub options:

a) For Single Property
b) For Object / Window
c) For Multiple Objects

a) For Single Property:

To verify one property of one object, we can use this option.

Example 1:

Object: Update

Focus to window → disabled
Open a record → disabled
Perform a change → enabled

Navigation:

Select position in script → Create menu → GUI check point → For single property → select testable object → select required property with expected value → click Paste.

Test Script

set_window("Flight Reservation", 4);

button_check_info("Update Order","enabled",0);

menu_select_item ("File;Open Order...");

set_window ("Open Order", 1);

button_set ("Order No.", ON);

edit_set ("Edit", "1");

button_press ("OK");

set_window ("Flight Reservation", 7);

button_check_info("Update Order","enabled",0);

button_set ("Business", ON);

button_check_info("Update Order","enabled",1);

button_press ("Update Order");

Example 2: Sample window (Input text box, OK button).

Expected: on focus to the window, Input is focused and OK is disabled; after filling Input, OK is enabled.

Script:

set_window("sample", 4);
edit_check_info("input", "focused", 1);
button_check_info("OK", "enabled", 0);
edit_set("input", "XXXX");
button_check_info("OK", "enabled", 1);
button_press("OK");

Example 3: Student window (Roll No list box, Name text box, OK button).

Expected: on focus to the window, Roll No is focused and OK is disabled; after selecting a Roll No, Name is focused and OK is still disabled; after entering Name, OK is enabled.

Script:

set_window("Student", 5);
edit_check_info("Roll No", "focused", 1);
button_check_info("OK", "enabled", 0);
list_select_item("Roll No", "XXXX");
edit_check_info("Name", "focused", 1);
button_check_info("OK", "enabled", 0);
edit_set("Name", "XXXX");
button_check_info("OK", "enabled", 1);
button_press("OK");

Case Study: testable properties per object type

Object Type          – Testable Properties
Push Button          – Enabled, Focused
Radio Button         – Enabled, Status
Check Box            – Enabled, Status
List / Combo Box     – Enabled, Focused, Count, Value
Menu                 – Enabled, Count
Table Grid           – Rows Count, Columns Count, Table Content
Edit Box / Text Box  – Enabled, Focused, Value, Range, Regular Expression, Date Format, Time Format

Example 4: Journey window (Fly From and Fly To list boxes).

Expected: the number of items in Fly To equals the number of items in Fly From minus 1 after selecting an item in Fly From.

set_window("Journey", 5);
list_select_item("Fly From", "xxxx");
list_get_info("Fly From", "count", n);
list_check_info("Fly To", "count", n-1);

Example 5: Sample 1 and Sample 2 windows (Item list box and OK in Sample 1; Display button and Text box in Sample 2).

Expected: the item selected in the list box is equal to the text box value when you click Display.

set_window("Sample 1", 5);
list_select_item("Item", "xxxx");
list_get_info("Item", "value", x);
button_press("OK");
set_window("Sample 2", 5);
button_press("Display");
edit_check_info("Text", "value", x);


Example 6: Student window (Roll No list box, Percentage, Grade, OK).

Expected:
If % >= 80, grade is A
If % < 80 and >= 70, grade is B
If % < 70 and >= 60, grade is C
Otherwise, grade is D

set_window("Student", 5);
list_select_item("Roll No", "xxx");
button_press("OK");
edit_get_info("Percentage", "value", P);
if (P >= 80)
	edit_check_info("grade", "value", "A");
else if (P < 80 && P >= 70)
	edit_check_info("grade", "value", "B");
else if (P < 70 && P >= 60)
	edit_check_info("grade", "value", "C");
else
	edit_check_info("grade", "value", "D");

Example 7: Insurance window (Type list box, Age, Gender, Qualification).

Expected:
If Type is "A", Age is focused
If Type is "B", Gender is focused
For any other type, Qualification is focused

set_window("Insurance", 5);
list_select_item("Type", "X");
list_get_info("Type", "value", x);
if (x == "A")
	edit_check_info("Age", "focused", 1);
else if (x == "B")
	list_check_info("Gender", "focused", 1);
else
	list_check_info("Qualification", "focused", 1);

b) For Object / Window:

To test more than one properties of single object, we can use this option.

Example 8:

Object: Update Order

Focus to window → disabled
Open a record → disabled
Perform a change → enabled, focused

Navigation:

Select position in script → Create menu → GUI check point → For object or window → select testable object (double click) → select required properties with expected values → click OK.

Syntax: obj_check_gui("object name", "checklist file.ckl", "expected values file", time to create);

In the above syntax, the checklist file specifies the list of properties to be tested and the expected values file specifies the expected values for those properties. These two files are created by WinRunner during checkpoint creation.

set_window ("Flight Reservation", 3);

obj_check_gui("Update Order", "list2.ckl", "gui2", 1);

menu_select_item ("File;Open Order...");


set_window ("Open Order", 1);

button_set ("Order No.", ON);

edit_set ("Edit", "1");

button_press ("OK");

set_window ("Flight Reservation", 3);

obj_check_gui("Update Order", "list4.ckl", "gui4", 1);

button_set ("First", ON);

obj_check_gui("Update Order", "list5.ckl", "gui5", 1);

button_press ("Update Order");

c) For Multiple Objects:

To verify more than one property of more than one object, we use this checkpoint in WinRunner.

Example 9:

Objects:          Insert Order   Update Order       Delete Order
Focus to window   Disabled       Disabled           Disabled
Open a record     Disabled       Disabled           Enabled
Perform change    Disabled       Enabled, Focused   Enabled

Navigation:

Select position in script → Create menu → GUI check point → For multiple objects → click Add → select testable objects → right click to quit selection → select required properties with expected values for every object → click OK.

Syntax:win_check_gui(“window”, “checklist file.ckl”, “expected values file”, Time to create)

set_window ("Flight Reservation", 2);

win_check_gui("Flight Reservation", "list4.ckl", "gui4", 1);

menu_select_item ("File;Open Order...");

set_window ("Open Order", 1);

button_set ("Order No.", ON);

edit_set ("Edit", "1");

button_press ("OK");

set_window ("Flight Reservation", 2);

win_check_gui("Flight Reservation", "list2.ckl", "gui2", 1);

button_set ("Business", ON);


win_check_gui("Flight Reservation", "list3.ckl", "gui3", 1);

button_press ("Update Order");

Example 10: Sample window (Age edit box).

Expected: range 16 to 80 years.

Navigation: Create menu → GUI check point → for object or window → select the Age object → select the range property → enter the from & to values → click OK.

set_window("sample", 5);
obj_check_gui("Age", "list1.ckl", "gui1", 1);

Example 11: Sample window (Name edit box).

Expected: alphabets in lower case.

Navigation:

Create menu → GUI check point → for object/window → select the Name object → select the regular expression property → enter the expected expression ([a-z]*) → click OK.

set_window("sample", 1);
obj_check_gui("name", "list1.ckl", "gui1", 1);

Here list1.ckl holds the regular expression property and gui1 holds the expected value [a-z]*.

Example 12:

Name object is taking alphabets

Regular expression [a-zA-Z]*

Example 13:

Name object is taking alphanumerics, but first character is alphabet

[a-zA-Z] [a-zA-Z0-9]*

Example 14:

Name object takes alphabets only, but allows "_" in the middle.

[a-zA-Z][a-zA-Z_]*[a-zA-Z]

Example 15:

Regular expression for a yahoo user ID.

Example 16:

Name object allows alphabets in lower case and that value starts with R and end with O.

[R][a-z]*[O]

Example 17:

Prepare Regular expression for the following text box. A text box allows 12 digit numbers along with * as mandatory and sometimes it allows – also.

[[0-9][*]]*

Editing Check Points:

During test execution test engineers are getting test results in terms of passed & failed. These results analyzed by test engineers before concentrating on defect tracking along with developers. In this review test engineers are performing changes in checkpoints due to their mistakes or changes in project requirements.

a) Changes in expected values:

Due to test engineer mistake or requirement change, test engineers perform changes in expected values through below navigation.

Navigation:

Run script Open result Change expected value Re-execute test to get correct results.

b) Add extra properties:

Some times test engineers are adding extra properties to existing checkpoints due to tester mistake or requirement enhancements.

Navigation:

Create menu → Edit GUI checklist → select checklist file name → click OK → select new properties to test → click OK → click OK to overwrite → click OK after reading the suggestion → change run mode to Update → click Run → run in Verify mode to get results → open the result → analyze the result and perform changes if required.

2. Bitmap Check Point:

To validate static images in our application build, test engineers use this checkpoint. Ex: logo testing, graph comparison, signature comparison, etc.

This Check point consists of two sub options:

a) For Object or Window
b) For Screen Area

a) For Object or Window:

To compare our expected image with actual image in our application build, we can use this option.

Example 1: comparing an expected image with the actual image (e.g. an Add New button): expected == actual → pass; expected != actual → fail.

Example 2: when the images are expected to differ: == → fail; != → pass.

Navigation:

Create menu Bitmap checkpoint for object or window selected expected image (double Click).

Syntax:obj_check_bitmap(“Image object name”, “Image file name.bmp”, Time to create);

b) For Screen Area:To compare our expected image area with actual, we can use this option.

(Example: an expected graph area showing "No of items = 10000" compared with an actual graph area showing "No of items = 10005".)


Navigation:

Create menu bitmap checkpoint for screen area select required image region right click to release.

Syntax: obj_check_bitmap("Image object name", "Image file name.bmp", time to create, x, y, width, height);

Note:

1) TSL functions support a variable number of parameters in a call, like the "C" language (no function overloading). ARITY – the number of arguments in a function.

2) In functionality test automation the GUI checkpoint is mandatory, but the bitmap checkpoint is optional because not all applications have images as contents.

3) Database Check Point:

Backend testing is a part of functionality testing. It is also known as database testing. During this testing, test engineers validate the impact of front-end operations on backend table contents in terms of data validation and data integrity. Data validation means checking whether the front-end values are stored correctly in the backend tables. Data integrity means checking whether the impact of front-end operations (updating / deletion) is reflected correctly in the backend table contents.

To automate above backend testing using WinRunner, test engineers are following database checkpoint concept in create menu.

In this backend test automation, test engineers are collecting this information from development team.

DSN (Data Source Name)
Table definitions
DDD (Database Design Document)
Forms vs. tables mapping

Depending on above information, test engineers are using database checkpoint in WinRunner to automate back end testing.

Step 1: Connect to DatabaseStep 2: Execute Select StatementStep 3: Provide results into Excel Sheet to analyze.

The database checkpoint consists of three sub options:

a) Default Check
b) Custom Check
c) Run Time Record Check

a) Default Check:

Test Engineers are conducting back end testing depending upon database table’s contents using this checkpoint.

Process: create the database checkpoint (the current content of the tables is captured as expected) → perform insert / delete / update operations through the front end → execute the database checkpoint again (the current content of the tables is captured as actual). If expected != actual, the front-end operations reached the backend (pass); if expected == actual, they did not (fail, may be).

Navigation:

Create menu → database check point → default check → specify the connection to the database using ODBC (local database) or Data Junction (for a remote or distributed database) → select "specify SQL statement" (C:\ Program files \ Mercury interactive \ WinRunner \ Temp \ test name \ msqr1.sql) → click Next → click Create to select the DSN (ex: machine data source Flight 32) → write the select statement (ex: select * from orders;) → click Finish.

Syntax:db_check(“Check list file name.cdl”, “Query result file name.xls”);

In the above syntax checklist specifies content is the property. Query result file specifies results of the query in terms of content.

b) Custom Check:

Test engineers are conducting backend testing depending on rows contents, column contents and content of database tables.

But test engineers are not using this option because default check content also showing no of rows and column names.

c) Run Time Record Check:

To find the mapping between front-end objects and backend columns, test engineers use this option. It is an optional checkpoint in the tester's job, used when there is a doubt about the mapping between front-end objects and backend columns. Expected mapping (example): front-end object X ↔ backend column a, front-end object Y ↔ backend column b.

To automate this kind of mapping testing, test engineers use the Run Time Record Checkpoint in WinRunner.

Navigation:

Create menu → database checkpoint → runtime record check → click Next → click Create to select the DSN → write a select statement with the doubtful columns (ex: select orders.order_number, orders.customer_name from orders) → click Next → select the doubtful front-end objects for those columns → click Next → select any one of the three options (exactly one matching record, one or more matching records, no matching records) → click Finish.

Syntax: db_record_check("check list file name.cvr", DVR_ONE_MATCH / DVR_ONE_OR_MORE_MATCH / DVR_NO_MATCH, variable);

In the above syntax the checklist file specifies the expected mapping between backend columns and front-end objects, the flag specifies the type of matching, and the variable receives the number of records matched.

for(i=1; i<=5; i++)

{

set_window ("Flight Reservation", 3);

menu_select_item ("File;Open Order...");

set_window ("Open Order", 1);

button_set ("Order No.", ON);

edit_set ("Edit", "1");

button_press ("OK");

db_record_check("list1.cvr", DVR_ONE_OR_MORE_MATCH,

record_num);

}

Note: Runtime Record Checkpoint does not allow “ ; ” at the end of the select statement. It is a new concept in WinRunner 7.0.

4. Text Check Point:

To conduct calculations and other text based tests, we can use get_text option in WinRunner. This option consists of two sub options.

a) From Object or Window
b) From Screen Area

a) From object or Window:

To capture object values in to variable we are using this option.

Navigation:

Create menu Get text From Object / window Select object (D Click).

Syntax:obj_get_text(“name of the object”, Variable);

Note: the above function is equivalent to edit_get_info("edit box name", "value", variable);

Example: Sample window (Input and Output text boxes).

Expected: Output = Input * 100

set_window(“sample”, 5);

obj_get_text(“input”,x);

obj_get_text(“output”,y);

if (y == x * 100)

printf(“test is pass”);

else

printf(“test is fail”);

b) From Screen Area:

To capture static text from screen area we can use this option.

Navigation:Create menu get text from screen area select required region right click to release.

Syntax:obj_get_text(“object name”, variable, x1, y1, x2, y2 );

Example 1:

Getting text from object / window by using sub strings to cut some area of string.

set_window(“flight reservation”, 5);

obj_get_text(“tickets”, t);

obj_get_text(“price”, p);

p = substr( p , 2, length(p) - 1);

obj_get_text(“total”, tot);

tot = substr(tot , 2, length(tot) - 1);

if (tot == t * p)

printf("test is pass");

else

printf("test is fail");

Example 2: Shopping window (QTY, Price shown as Rs:xxx/-, Total shown as Rs:xxx/-).

Expected: Total = Price * QTY

set_window(“shopping”);

obj_get_text(“QTY”, q);

obj_get_text(“price”, p);

p=substr(p,4,length(p)-5);

obj_get_text(“Total”, tot);

tot=substr(tot,4,length(tot)-5);

if (tot == q * p)

printf(“test is pass”);

else

printf(“test is fail”);

tl_step( ):

To create our own pass / fail result in result window, we can use this statement.

Syntax:tl_step(“step name”, 0 / 1, “description”);

In the above syntax, tl stands for test log (test results). Status: 0 – pass, 1 (non-zero) – fail.
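A minimal usage sketch (the step name, message text and the variables tot, p and q are illustrative):

if (tot == p * q)
	tl_step("price_check", 0, "Total equals Price * QTY");
else
	tl_step("price_check", 1, "Total does not match Price * QTY");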

Data Driven Test (DDT):

DDT is nothing but re-testing: executing one test more than once on the same application build with multiple test data.

There are four ways to supply test data for a data driven test:

a) Dynamic test data submission
b) Through flat files (.txt)
c) From front-end grids (list, menu, table, ActiveX and data window)
d) Through Excel sheets

a) Dynamic Test Data Submission:

Sometimes test engineers conduct re-testing with multiple test data submitted manually through the keyboard. To receive a value from the keyboard during test execution, we can use the below TSL statement.

Syntax:create_input_dialog(“message”);

Example 1:

for(i=1; i<=5; i++)

{

x = create_input_dialog("Enter order No");

set_window ("Flight Reservation", 3);

menu_select_item ("File;Open Order...");

set_window ("Open Order", 1);

button_set ("Order No.", ON);

edit_set ("Edit_1", x);

button_press ("OK");

}

Example 2: Multiply window (Input 1, Input 2, Result, OK).

Expected: Result = Input 1 * Input 2. Test data on paper: 10 pairs of inputs.

for(i=1; i<=10; i++)
{
	x = create_input_dialog("Enter Input 1");
	y = create_input_dialog("Enter Input 2");
	set_window("Multiply", 3);
	edit_set("Input 1", x);
	edit_set("Input 2", y);


button_press ("OK");obj_get_text(“result”, temp);

if(temp = = X * Y)

tl_step(“step”,0, “Pass”);

else

tl_step(“step”,1,“fail”);}

Example 3: Shopping window (Item No, QTY, Price, Total, OK; Price and Total shown as $ amounts).

Expected: Total = Price * QTY. Test data on paper: 10 pairs of item no and QTY.

for (i = 1; i <= 10; i++)
{
	x = create_input_dialog("Enter Item No");
	y = create_input_dialog("Enter QTY");
	set_window("Shopping", 5);
	edit_set("Item No", x);
	edit_set("QTY", y);
	button_press("OK");
	obj_get_text("Price", p);
	p = substr(p, 2, length(p)-1);
	obj_get_text("Total", tot);
	tot = substr(tot, 2, length(tot)-1);
	if (tot == p * y)
		tl_step("step1", 0, "Test is pass");
	else
		tl_step("step1", 1, "Test is fail");
}

Example 4: Login window (User ID, Pwd, OK, Next).

Expected: if Next is enabled, the user is authorised; if Next is disabled, the user is unauthorised. Test data on paper: 10 pairs of user IDs & passwords.

for(i=1; i<=10; i++)
{
	x = create_input_dialog("Enter User ID");
	y = create_input_dialog("Enter Pwd");
	set_window("Login", 5);
	edit_set("User ID", x);
	password_edit_set("Pwd", password_encrypt(y));
	button_press("OK");
	button_get_info("next", "enabled", n);
	if (n == 1)
		tl_step("step1", 0, "User is Authorised");
	else
		tl_step("step1", 1, "User is Unauthorised");
}

b) Through Flat Files:

Sometimes test engineers conduct re-testing with multiple test data taken from a flat file.

To prepare such automated test scripts, test engineers use a few file functions in WinRunner.

1. file_open( ):

we can use this function to open file into RAM with required permissions.


Syntax:file_open(“File Path”, FO_MODE_READ / FO_MODE_WRITE / FO_MODE_APPEND);

2. file_getline( ):

We can use this function to read a line from opened file in READ MODE.

Syntax :file_getline(“path of file”, Variable);

Note: in TSL file pointer incremented automatically up to end of file.

3. file_close( ):

we can use this function to sweep out a opened file from Ram .

Syntax:file_close(“path of file”);

Example 1:

f = "c:\\My Documents data.txt";
file_open(f, FO_MODE_READ);
while (file_getline(f, s) != E_FILE_EOF)
{
	set_window("Flight Reservation", 5);
	menu_select_item("File;Open Order...");
	set_window("Open Order", 2);
	button_set("Order No.", ON);
	edit_set("Edit", s);
	button_press("OK");
}
file_close(f);

Example 2: Multiply window (Input 1, Input 2, Result, OK).

Expected: Result = Input 1 * Input 2. Test data in file: c:\\My Documents data.txt

f = "c:\\My Documents data.txt";
file_open(f, FO_MODE_READ);
while (file_getline(f, s) != E_FILE_EOF)


{
	split(s, x, " ");
	set_window("Multiply", 3);
	edit_set("Input 1", x[1]);
	edit_set("Input 2", x[2]);
	button_press("OK");
	obj_get_text("result", temp);
	if (temp == x[1] * x[2])
		tl_step("step", 0, "Pass");
	else
		tl_step("step", 1, "fail");
}
file_close(f);

Example 3: Shopping window (Item No, QTY, Price, Total, OK; Price and Total shown as $ amounts).

Expected: Total = Price * QTY (e.g., Ram purchases item 101 as 10 pieces). Test data in file: c:\\My Documents data.txt

f="c:\\My Documents data.txt";file_open(f,FO_MODE_READ);while(file_getline(f,s) !=E_FILE_EOF){split(s,x “ “);set_window(“Shopping”, 5);edit_set(“Item No”, x[3]);edit_set(“QTY”, x[6]);button_press(“OK”);obj_get_text(“Price”, p);p = substr(p,2,length(p)-1);obj_get_text(“Total”, tot);tot = substr(tot,2,length(tot)-1);if (tot = p * x[6])

tl_step(“step1”, 0 ,“Test is pass”);else

tl_step(“step1”, 1, “Test is fail”);}file_close(f);

Example 4: Login window (User ID, Pwd, OK, Next).

Expected: if Next is enabled the user is authorised; if Next is disabled the user is unauthorised. Test data in file c:\\My Documents data.txt, with records of the form xxxx@xxx xx (the script splits each record at "@" and at the space).

f="c:\\My Documents data.txt";file_open(f,FO_MODE_READ);while(file_getline(f,s) !=E_FILE_EOF){split(s,x “@“);split(x[2],y,“ “);set_window(“Login”,5);edit_set(“User ID”, x[1]);password_edit_set(“Pwd”, passwore_encrypt(y[2]));button_press(“OK”);button_get_info(“next”, “enabled”, n);if ( n = = 1)

tl_step(“step1”, 0, “User is Authorised”);else

tl_step(“step1”, 1, “User is Unauthorised”);}file_close( );

4. file_printf( ):

We can use this function to print specified text into a file. If file is opened in WRITE / APPEND MODE.

Syntax :

file_printf("path of file", "format", values / variables);

EX: for variables a and b, file_printf("xxxx", "a = %d and b = %d", a, b); writes "a = xx and b = xx" into the file.


5. file_compare( ):

We can use this function to compare two files content.

Syntax:file_compare(“path of file1”, “path of file2”, “path of file3”);

In the above syntax third argument is optional. It specifies concatenated content of both compared files.

c) From Front-end Grids:

Some times test engineers are conducting re-testing depends on multiple data objects such as list, menu, ActiveX, table, data window.

Example 1: Journey window (Fly From and Fly To list boxes).

Expected: the item selected in Fly From is not available in Fly To.

set_window("Journey", 5);
list_get_info("Fly From", "count", n);
for (i = 0; i < n; i++)
{
	list_get_item("Fly From", i, x);
	list_select_item("Fly From", x);
	if (list_select_item("Fly To", x) != E_OK)
		tl_step("step", 0, "Item does not appear in Fly To – test is pass");
	else
		tl_step("step", 1, "Item appears in Fly To – test is fail");
}

In WinRunner, every TSL statement returns E_OK when it executes successfully on our build.

Example 2: Sample 1 and Sample 2 windows (Name list box and OK in Sample 1; Display button and Text box in Sample 2).

Expected: the item selected in the list box appears in the text box in the form "My Name is XXXXX".

set_window("Sample 1", 5);
list_get_info("Name", "count", n);
for (i = 0; i < n; i++)
{
	set_window("Sample 1", 5);
	list_get_item("Name", i, x);
	list_select_item("Name", x);
	button_press("OK");
	set_window("Sample 2", 5);
	button_press("Display");
	obj_get_text("Text", temp);
	if (temp == "My Name is " & x)
		tl_step("step", 0, "Test is Pass");
	else
		tl_step("step", 1, "Test is fail");
}

Note: in TSL, & is the concatenation operator (it joins two strings).

Example 3: Employee window (EMP No list box, bsal, gsal, OK).

Expected:
If bsal >= 15000, then gsal = bsal + 10% of bsal
If bsal < 15000 and >= 8000, then gsal = bsal + 5% of bsal
If bsal < 8000, then gsal = bsal + 200

set_window("Employee", 5);
list_get_info("EMP No", "count", n);
for (i = 0; i < n; i++)
{
	list_get_item("EMP No", i, x);
	list_select_item("EMP No", x);
	button_press("OK");
	obj_get_text("bsal", b);
	obj_get_text("gsal", g);


	if (b >= 15000 && g == b + b*10/100)
		tl_step("step1", 0, "Calculation is Pass");
	else if (b < 15000 && b >= 8000 && g == b + b*5/100)
		tl_step("step1", 0, "Calculation is Pass");
	else if (b < 8000 && g == b + 200)
		tl_step("step1", 0, "Calculation is Pass");
	else
		tl_step("step1", 1, "Calculation is fail");
}

Example 4: Insurance window (Type list box, Age, Gender, Qualification).

Expected:
If Type is "A", Age is focused
If Type is "B", Gender is focused
For any other type, Qualification is focused

set_window("Insurance", 5);
list_get_info("Type", "count", n);
for (i = 0; i < n; i++)
{
	list_get_item("Type", i, x);
	list_select_item("Type", x);
	if (x == "A")
		edit_check_info("Age", "focused", 1);
	else if (x == "B")
		list_check_info("Gender", "focused", 1);
	else
		list_check_info("Qualification", "focused", 1);
}

Example 5: AUDIT window, with a File_store table grid and a Total field.

File_store:
S.No   File Path   Type   Size
1      xx          ..     10kb
2      xx          ..     20kb
3      xx          ..     30kb
4      xx          ..     40kb
5      xx          ..     50kb
Total: xxx kb

Expected: Total = sum of the Size column.

sum = 0;
set_window("AUDIT", 5);
tbl_get_rows_count("file_store", n);
for (i = 1; i < n; i++)
{
	tbl_get_cell_data("file_store", "#"&i, "#3", s);
	s = substr(s, 1, length(s)-2);
	sum = sum + s;
}
obj_get_text("Total", tot);
tot = substr(tot, 1, length(tot)-2);
if (tot == sum)
	tl_step("step1", 0, "calculation is pass");
else
	tl_step("step1", 1, "calculation is fail");

6. list_get_item( ):

We can use this function to capture specified list item through Item number. Here item number starts with “0”.

Syntax:list_get_item(“list box name”, Item No, Variable);

7. tbl_get_rows_count( ):

We can use this function to find no of rows in table grid.

Syntax: tbl_get_rows_count("Table grid name", variable);

8. tbl_get_cell_data( ):

We can use this function to capture a specified cell value into a variable through the row no & column no.

Syntax: tbl_get_cell_data("Table grid name", "# row no", "# column no", variable);

d) Through Excel Sheet:

In general testing engineers are conducting data driven test using Excel Sheet data. This method is default method in data driven testing. To create this type of automated script WinRunner provides special navigation.

Navigation:

Create the test for one set of data → Tools menu → data driven wizard → click Next → browse the path of the excel sheet (c:\PF\MI\WR\Temp\testname\default.xls) → specify the variable name to assign the path (by default: table) → select "import data from database" → click Next → select the type of database connection (ODBC or Data Junction) → select "specify SQL statement" (c:\PF\MI\WR\Temp\testname\msqrl.sql) → click Next → click Create to select the data source name → write the SQL statement (select order_number from orders) → click Next → select the excel sheet column names in the required places of the test script → select "show data table now" → click Finish → click Run → analyse the results manually.

Example 1:

table = "default.xls";
rc = ddt_open(table, DDT_MODE_READWRITE);
if (rc != E_OK && rc != E_FILE_OPEN)
	pause("Cannot open table.");
ddt_update_from_db(table, "msqr1.sql", count);
ddt_save(table);
ddt_get_row_count(table, n);
for (i = 1; i <= n; i++)
{
	ddt_set_row(table, i);
	set_window("Flight Reservation", 6);
	menu_select_item("File;Open Order...");
	set_window("Open Order", 1);
	button_set("Order No.", ON);
	edit_set("Edit", ddt_val(table, "order_number"));
	button_press("OK");
}
ddt_close(table);

1. ddt_open( ):

We can use this function to open a test data excel sheet into RAM with specified permissions.

Syntax:ddt_open(“path of excel file”, DDT_MODE_READ / READWRITE);


2. ddt_update_from_db( ):

We can use this function to extend excel sheet data w.r.t changes in database.

Syntax:ddt_update_from_db(“path of excel file”, “path of query file”, variable);

3. ddt_save( ):

We can use this function to save excel sheet modifications permanently.

Syntax:ddt_save(“Path of excel sheet”);

4. ddt_get_row_count( ):

We can use this function to find total no of rows in excel sheet.

Syntax:ddt_get_row_count(“path of excel sheet”, variable);

5. ddt_set_row( ):

We can use this function to point a row in excel sheet.

Syntax:ddt_set_row(“path of excel sheet”, row no);

6. ddt_val( ):

We can use this function to capture specified column value from a pointed row.

Syntax:ddt_val(“path of excel sheet”, “column name”);

7. ddt_set_val( ):

We can use this function to write a value into excel sheet column.

Syntax:ddt_set_val(“path of excel sheet”, “column name”, value / variable);

8. ddt_close( ):

We can use this function to swapout a open excel sheet from RAM.Syntax:

ddt_close(“path of excel sheet”);

Example 2:

Prepare a data driven test script for the below scenario (default.xls):

Input1   Input2   Result
xx       xx
xx       xx
xx       xx

Expected: Result should be written into the excel sheet as Input1 + Input2.

table = "default.xls"; rc = ddt_open(table, DDT_MODE_READWRITE);if (rc!= E_OK && rc != E_FILE_OPEN)

pause("Cannot open table.");ddt_get_row_count(table,n);for(i = 1; i <= n; i ++){

ddt_set_row(table,i);a=ddt_val(table,"Input1");b=ddt_val(table,"Input2");c=a+bddt_set_val(table,"result",c);ddt_save(table);

}ddt_close(table);

Example 3:

Prepare a test script for the below scenario (default.xls):

Input   Result
xx
xx
xx

Expected: the factorial of Input is written into the Result column.

table = "default.xls"; rc = ddt_open(table, DDT_MODE_READWRITE);if (rc!= E_OK && rc != E_FILE_OPEN)

pause("Cannot open table.");ddt_get_row_count(table,n);for(i = 1; i <= n; i++){

ddt_set_row(table,i);x=ddt_val(table,"input");fact=1;

Page 47: testing tools for Beginners

for(j = x; j >= 1;j--)fact=fact*j

ddt_set_val(table,"result",fact);ddt_save(table);

}ddt_close(table);

Example4:

Prepare test script to print a list box values into a flat file one by one.

f="c:\My Documents\sm.txt";file_open(f,FO_MODE_WRITE);set_window ("Flight Reservation",10);list_get_info("Fly From:", "count",n);for(i=0; i<n; i++){

list_get_item("Fly From:",i,x);file_printf(f,"%s\r\n",x);

}file_close(f);

Example 5:

Prepare a test script to print list box values into an excel sheet one by one.

f = "c:\My Documents\sm.xls";
file_open(f, FO_MODE_WRITE);
set_window("Flight Reservation", 10);
list_get_info("Fly From:", "count", n);
for (i = 0; i < n; i++)
{
	list_get_item("Fly From:", i, x);
	file_printf(f, "%s\n", x);
}
file_close(f);

Synchronization Point:

To maintain time mapping between testing tool and application build during test execution, we can use this concepts.

1. wait ( ):

This function defines fixed waiting time during test execution.


Syntax:wait( time in seconds);

2. For Object / Window Property :

WinRunner waits until specified object property is equal to our expected value.

Navigation: Select position in script → Create menu → Synchronization point → For object/window property → select the indicator object (ex: a status or progress bar) → select the required property with its expected value (100% – enabled, <100% – disabled) → specify the maximum time to wait → click Paste.

Syntax:obj_wait_info(“object Name”, “property”, Expected value, maximum time to wait);

3. For Object / Window Bitmap:

Some times test engineers are defining time mapping between tool and application depends on Images also.

Navigation:Select position in script create menu synchronization point for object/window Bitmap select indicator image (D click).

Syntax: obj_wait_bitmap(“Image object name”, “Image file name.bmp”, maximum time to wait);

4. For Screen Area Bitmap:

Some times test engineers are defining time mapping between testing tool and application depends on part of images also.

Navigation: Select position in script → Create menu → Synchronization point → For screen area bitmap → select the required image region → right click to release.

Syntax: obj_wait_bitmap("Image object name", "Image file name.bmp", maximum time to wait, x, y, width, height);

5. Change Runtime Settings:

During test script execution, recording time values are not useful. During running, WinRunner depends on two runtime parameters. Test engineers are performing changes in the parameters if required.

Delay for window synchronization – 1000 msec (default)
Timeout for executing context sensitive statements and checkpoints – 10000 msec (default)

Navigation: Settings menu → General options → change the delay and timeout as required → click Apply → click OK.

BATCH TESTING

The sequential execution of more than one test to validate functionality is called batch testing. Batch testing is a suitable approach to increase the chances of finding bugs during test execution. A test batch is also known as a test suite or test set. Every test batch consists of a set of multiple dependent tests; in every test batch, the end state of one test is the base state of the next test. To create this type of batch in WinRunner, we can use the below statements:

a) call testname();
b) call "path of test"();

We use the first syntax when the calling & called tests are both in the same folder, and the second syntax when the calling & called tests are in different folders.

Example 1: Test case 1 – successful order open; Test case 2 – successful updation.

Example 2: Test case 1 – successful new user registration; Test case 2 – successful login; Test case 3 – successful mail open; Test case 4 – successful mail reply.

Example 3: Test case 1 – successful order open; Test case 2 – successful calculation.

[Diagram: the main (calling) test passes a value xx to the sub (called) test; the sub test receives it in its parameter x and uses it, e.g. edit_set("edit", x). Input values come from Default.xls.]

Parameter Passing:

To transmit values from one test to another test, we can use the parameter passing concept in batch testing.

From the above model, the sub test maintains parameters to receive values from the main test. To create this type of parameter we can follow the below navigation.

Navigation:Open sub test → file menu → test properties → click parameter tab → click add to create new properties → enter parameter name with description → click ok → click add to create more parameters → click ok → use that parameter in required place of test script.
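A minimal sketch, assuming the sub test declares a parameter named ord_no in its test properties:

# main test: pass order number 5 to the sub test
call open_order(5);

# sub test (parameter ord_no): use the received value
set_window("Open Order", 1);
edit_set("Edit", ord_no);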

DATA DRIVEN BATCH:

Main Test:


table = "default.xls";
rc = ddt_open(table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause("Cannot open table.");
ddt_get_row_count(table, n);
for(i = 1; i <= n; i++)
{
    ddt_set_row(table, i);
    temp = ddt_val(table, "input");
    call subsri(temp);
    set_window("Flight Reservation", 1);
    obj_get_text("Tickets:", t);
    obj_get_text("Price:", p);
    p = substr(p, 2, length(p) - 1);
    obj_get_text("Total:", tot);
    tot = substr(tot, 2, length(tot) - 1);
    if(tot == p*t)
        tl_step("s1", 0, "test is pass");
    else
        tl_step("s1", 1, "test is fail");
}
ddt_close(table);

Sub Test:

set_window("Flight Reservation", 2);
menu_select_item("File;Open Order...");
set_window("Open Order", 1);
button_set("Order No.", ON);
edit_set("Edit", x);
button_press("OK");
set_window("Flight Reservation", 1);
obj_get_text("Name:", t);
if(t == "")
    pause("cannot open record");

treturn( ):

We can use this function to return a value from sub test to main test.

Syntax:treturn( Value / Variable);

Note: It allows only one value to return.


Silent Mode:

WinRunner allows you to continue test execution even when a checkpoint fails. To define this type of situation we can follow the below navigation.

Navigation: Settings menu → General options → Run tab → select Run in batch mode → click Apply → click OK.

Note: When WinRunner is in silent mode, tester-interactive statements do not work. Ex: create_input_dialog("xxxxx");

Public Variables :

To access a single variable in more than one test of a batch, we can declare it as public.

Syntax:public variable;

Note: By default variables are local in TSL scripts.
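A minimal sketch (the variable name is an assumption for illustration):

# in the first test of the batch
public order_count;
order_count = 5;

# a later test of the same batch can still read the value
printf(order_count);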

FUNCTION GENERATOR:

It is a library of TSL functions, classified into categories. To search for a required TSL function, we can use the below navigation.

Create menu → Insert function → From function generator → select required category → select required function based on description → fill arguments → click Paste.

Example 1:

Clipboard Testing: a tester conducts a test on a selected part of an object's value.

(Example of treturn in a batch — the main test branches on the value returned by the sub test:)

# main test
t = call subtest(xx);
if(t == 1)
{
}
else
    continue;

# sub test
if (condition)
    treturn(1);
else
    treturn(0);

set_window("login", 5);
edit_get_selection("Agent Name", v);
printf(v);

Syntax: edit_get_selection("Name of edit box", variable);

Example 2:

Window Existence Test:

This test checks whether the specified window is available on the desktop or not.

Syntax:win_exists(“window name”, time);

In the above syntax, time specifies the delay before testing the existence of the window. This function returns E_OK if the window exists, else E_NOT_FOUND.

Case Study: after test 1, the batch branches on whether the "sample" window exists (pass path: tests 2, 3, 4; fail path: tests 3, 4).

call test1();
if(win_exists("sample", 0) == E_OK)
{
    call test2();
    call test3();
    call test4();
}
else
{
    call test3();
    call test4();
}

Example 3:

Open Project:

WinRunner allows you to open an application (project) during test execution (System category).

Syntax: invoke_application("path of .exe", "command", "working directory", SW_SHOW / SW_SHOWMINIMIZED / SW_SHOWMAXIMIZED);
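A minimal sketch (the .exe path is an assumption — the WinRunner Flight sample application on a typical installation):

invoke_application("c:\Program Files\Mercury Interactive\WinRunner\samples\flight\app\flight4a.exe", "", "", SW_SHOW);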

Example 4:

Search TSL function to print System Date?
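One candidate found in the function generator is time_str( ), which returns the current date and time as a string (a minimal sketch):

d = time_str();    # e.g. "Fri Jan 10 12:30:45 2003"
printf(d);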

[Diagram: a TSL script (front end) connects to the back end database through a DSN using db_connect( ), db_execute_query( ) and db_write_records( ); a prepared "select" statement is executed and the results are written to an Excel / flat file.]

Example 5:

Search TSL function to print time out.

Syntax: getvar("timeout_msec");

x = getvar("timeout_msec");
printf(x);

Example 6:

Search TSL function to change time out with out using settings menu.

Syntax: setvar("timeout_msec", time in msec);

Example 7:

Search TSL function to print parent path of WinRunner.

Example 8:

Search TSL function to print path of current test.
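A possible answer (an assumption to verify in the function generator — the "testname" test property is assumed to hold the current test's path):

p = getvar("testname");    # assumed system variable for the current test path
printf(p);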

Example 9:

Execute Prepared Query:

WinRunner allows you to execute prepared queries. A prepared query contains a variable in the structure of the query; this query is also known as a dynamic query.


db_connect( ):

We can use this function to connect database using existing DSN.

Syntax:

db_connect(“session Name”, “DSN=****”);

In the above syntax, the session name indicates the resources allocated to the user when connected to the database.

db_execute_query( ):

We can use this function to execute specified select statement on that connected database.

Syntax:db_execute_query(“session name”, “select statement”, variable);

In the above syntax, the variable receives the number of rows selected after execution of that statement.

db_write_records( ):

We can use this function to copy query result into specified file.

db_write_records(“session name”, “destination file path”, TRUE / FALSE, NO_LIMIT);

In the above syntax, TRUE indicates query result with header and FALSE indicates query result without header.

NO_LIMIT specifies that no restrictions on query result size.

Note : These three functions available in database category.

Example:

x = create_input_dialog("enter limit");
db_connect("query1", "DSN=Flight32");
db_execute_query("query1", "select * from orders where order_number <= " & x, num);
db_write_records("query1", "default.xls", FALSE, NO_LIMIT);

User Defined Functions:

Like a programming language, TSL allows you to create user-defined functions. Every user-defined function encapsulates a repeatable navigation in your build w.r.t. testing.


Syntax:

public / static function function_name(in / out / inout argument_name)
{
    # repeatable test script
    return( );
}

In the above syntax, a public function can be invoked from any test. A static function maintains constant memory locations for its variables during execution.

Note: We can use a static function to maintain the output of one execution as the input to the next execution.

→ "in" parameters work as general arguments.
→ "out" parameters work as return values.
→ "inout" parameters work as both in and out.

User defined functions allows return statements to return one value.

Example:

public function add(in x, in y, out z)
{
    z = x + y;
}

calling test:

a = 10;
b = 20;
add(a, b, c);
printf(c);

Example2:


public function add(in x, in y)
{
    auto z;
    z = x + y;
    return(z);
}

calling test:

a = 10;
b = 20;
c = add(a, b);
printf(c);

Example 3:

public function add(in x, inout y)
{
    y = x + y;
}

calling test:

a = 10;
b = 20;
add(a, b);
printf(b);

Example 4:

public function open(in x)
{
    set_window("Flight Reservation", 2);
    menu_select_item("File;Open Order...");
    set_window("Open Order", 1);
    button_set("Order No.", ON);
    edit_set("Edit", x);
    button_press("OK");
}

To call user-defined functions from any required test script, we make the user-defined functions available as compiled modules (loaded like .EXE copies). To do this task, test engineers follow the below navigation.

Open WinRunner → click new → record repeatable navigation’s as UDF’s → save the module in dat folder → file menu → test properties → general tab → change test type Compiled module →


click apply → click OK → execute it once (a permanent compiled copy of the user-defined functions is created on the hard disk) → write a load statement for it in the startup script of WinRunner (c:\Program Files\Mercury Interactive\WinRunner\dat\myinit).

load( ):

We can use this statement to load user defined .EXE from hard disk to RAM.

Syntax:load(“compiled module name”, 0 / 1, 0 / 1);

In the above syntax → the first 0 / 1 defines user defined / system defined.
→ the second 0 / 1 defines whether the path appears in the WinRunner Window menu or not.

Note: We can use this load statement in Startup Script of WinRunner.
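A minimal sketch for the startup script ("flight_funcs" is a hypothetical compiled module saved in the dat folder):

# flags as explained above: first 0 / 1 = user / system defined, second 0 / 1 = path shown / not shown in the Window menu
load("flight_funcs", 0, 1);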

unload( ):

We can use this function to unload unwanted functions from RAM. We can use this statement in our test scripts if required.

Syntax:unload(“path of compiled module”, “unwanted function name”);

reload( ):

We can use this function to reload unloaded functions once again.

Syntax:reload(“path of compiled module”, “unloaded function name”);

OR reload("path of compiled module", 0/1, 0/1);  → loads all functions

LEARNING

In general test automation process starts with learning to recognize objects and windows in your application build. WinRunner 7.0 supports auto learning and pre learning.

1. Auto Learning:

During recording, WinRunner automatically learns all the objects and windows that you operate.

[Diagram: recognized entries are stored in the GUI Map (.gui file). In global GUI map file mode the entries are saved and opened explicitly and shared by tests; in per-test mode a .gui file is saved and opened implicitly with each test.]

Step 1: Start recording
Step 2: Recognize objects (during recording)
Step 3: Script generation
Step 4: Catch entry (during running)
Step 5: Catch object (during running)

WinRunner maintains the recognized entries in the GUI Map. To edit these entries, we can follow the below navigation.

Tools menu → gui map editor.

To maintain these entries, test engineers follow two types of modes.

a) Global GUI Map File.b) Per Test Mode.

a) Global GUI Map File:

In this mode WinRunner maintains common entries for objects and windows in a single file.

If test engineers forget to save entries, WinRunner keeps the unsaved entries in a default buffer (10 KB). To open this buffer, test engineers follow the below navigation.

Tools → GUI Map editor → view menu → GUI Files(LO < temporary >).

To save / open GUI Map entries, test engineers use file menu options in GUI Map editor.

b) Per Test Mode:

In this mode WinRunner maintains entries for objects & windows per every test.


In general WinRunner maintains the global GUI map file mode. If we have to change to per-test mode, we can use the below navigation.

Settings menu → general options → environment tab → select GUI Map File Per Test → click apply → click ok.

Note: In general test engineers are using global GUI Map file mode.

2. Pre Learning:

In lower versions of WinRunner (e.g. 6.0, 6.5), the test engineer's job starts with learning, because auto learning is a new concept in WinRunner 7.0.

To conduct this Pre Learning before starts recording, we can use rapid test script wizard (RTSW).

Open Build & WinRunner → create menu → Raped Test script wizard → click next → show application main window → click next → select no test → click next → enter sub menu symbol(…, >>,→) → click next → select pre learning mode(express, comprehensive) → learn →say yes / no to open project automatically during WinRunner launching → click next → remember paths of startup scripts and GUI Map File → click next → click ok.

Some times test engineers perform changes in entries w.r.t test requirements.

Situation 1: (Wild Card Characters)

Sometimes the labels of objects / windows in our application vary depending on multiple input values. To create a data driven test on this type of object / window, we can modify the corresponding object / window entry with wild card characters.

Original Entry

Logical name : fax order no1
{
    class : window
    label : "fax order no. 1"
}

Modified Entry

Logical name : fax order no1
{
    class : window
    label : "fax order no. *"        (the varying number replaced with a wild card)
}

To perform above like change, we can follow below navigation.

Tools → GUI Map editor → select corresponding entry → click modify → insert wild card changes like as above example → click ok.

Situation 2 : (Regular Expression)

Sometimes the labels of objects / windows in our application build vary depending on events. To create a data driven test on this type of object / window, we can change the entry using a regular expression.

[Diagram: two states of the same "Sample" window — the button label changes from Start to Stop.]

Original Entry

Logical name : start
{
    class : push button
    label : "start"
}

Modified Entry

Logical name : start
{
    class : push button
    label : "![s][to][a-z]*"
}

Situation 3: (Virtual Object Wizard)

Sometimes WinRunner is not able to recognize advanced technology objects in our application build. To forcibly recognize such unrecognized objects, we can use the Virtual Object Wizard.

Navigation:Tools menu → virtual object wizard →click next → select expected type → click next → click mark object to select non recognized area → right click to release → click next → enter logical name to that entry → click next → say yes / no to create more virtual objects → click finish.

Situation 4: (Mapped to Standard Class)

Sometimes WinRunner is not able to return all available properties of a recognized object. To get all testable properties for that object we can follow the below navigation.

Navigation:Tools Menu → GUI Map Configuration → click add → Show non testable object → click ok → click configuration →select mapped to standard class → click ok.

Situation 5:( GUI Map Configuration)

Sometimes more than one object in a single window has the same physical description w.r.t. WinRunner defaults (class & label).

Navigation:


Tools Menu → GUI Map Configuration → select object type → click configuration → select distinguishable properties into obligatory and optional list. → click ok.

Note: In general test engineers maintain MSW_id as an optional property for every object type, because every object has a unique MSW_id.

Sample:

Logical Name : OK
{
    class : push button
    label : "OK"
    MSW_id : XXXX
}

Logical Name : OK_1
{
    class : push button
    label : "OK"
    MSW_id : XXXX
}

Situation 6: (Selective Recording)

WinRunner allows you to perform recording on specified applications only.

Navigation:Settings → General options → record tab → click selective recording → select record only on selected applications → select record on start menu & Windows explorer if required → Browse required project path → click OK.

a) USER INTERFACE TESTING:

WinRunner is a functionality testing tool, but it also provides a facility to conduct user interface testing. In this testing WinRunner applies Microsoft's six rules to our application interface:
→ Controls are initcap (start with upper case)
→ OK / Cancel existence
→ System menu existence
→ Controls must be visible
→ Controls are not overlapped
→ Controls are aligned

To apply above six rules on our application build, WinRunner uses below TSL functions.

a) load_os_api( ):

We can use this function to load the application program interface (API) system calls into RAM.


Syntax:load_os_api( );

Note: Without loading the API system calls into RAM, we are not able to conduct user interface testing.

b) configure_chkui( ):

We can use this function to customize which of Microsoft's six rules are applied to our application build.

Syntax:configure_chkui(TRUE / FALSE, …….);

c) check_ui( ):

We can use this function to apply the above customized rules on a specified window.

Syntax: check_ui("Window Name");
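A minimal sketch combining the three calls (the "Login" window name and the six TRUE flags are assumptions — one flag per rule):

load_os_api();                                        # load API system calls into RAM
configure_chkui(TRUE, TRUE, TRUE, TRUE, TRUE, TRUE);  # apply all six rules
set_window("Login", 5);
check_ui("Login");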

To create user interface test script, test engineers follows below navigation.

Open WinRunner / build → create menu → RTSW → click next → show application main window → click next → select user interface test → click next → specify sub menu symbol → click next → select learning mode → click learn → say YES / NO to open your application automatically during WinRunner launching → remember paths of startup scripts & GUI map file → remember path of the user interface test script → click ok → click run → analyze results manually.

Note: Sometimes RTSW does not appear in the create menu:
a) if you selected the web test option in the add-in manager, or
b) if you are in per-test mode.

b) REGRESSION TESTING:

In general test engineers follow the below approach after receiving a modified build from developers.

Receive Modified Build↓

GUI Regression↓

BIT Map regression↓

Functionality Regression

WinRunner provides a facility to automate GUI regression and bitmap regression.

i. GUI Regression Test:

We can use this option to find object properties level differences in between old build and new build.


Navigation :Open WinRunner / Build → create menu → RTSW → click next → show application main window → click next→ select use existing information → click next → select GUI Regression test script→ click next → remember path of GUI Regression test script → click ok → open modified build and close old build → click run → analyze results manually.

ii. BIT Map Regression Test:

We can use this option to find image level differences between the old build and the modified build. This regression is optional, because not all screens contain images. Navigation :

Open WinRunner / Build → create menu → RTSW → click next → show application main window → click next→ select use existing information → click next → select BIT Map Regression test script→ click next → remember path of BIT Map Regression test script → click ok → open modified build and close old build → click run → analyze results manually.

Exceptional Handling:

Exception is nothing but runtime error. To handle test execution errors in WinRunner, we can use three types of exceptions.

a) TSL Exceptionsb) Object exceptionsc) Popup Exceptions.

a) TSL Exceptions:

We can use these exceptions to handle run time errors depending on the return codes of TSL statements.

[Diagram: set_window("X", 5) in a test script returns E_NOT_FOUND; the handler function recovers the situation by opening the X window so execution can continue.]

To create above like exceptions, we can follow below navigation.

Navigation:

Tools → exception handling → select exception type as TSL → click next → enter exception name → enter TSL function name → specify return code → enter handler function name → click ok → click paste → click ok after reading the suggestion → click close → record the required navigation to recover the situation → make it a compiled module → write a load statement for it in the startup script of WinRunner.

Example:

public function mindq(in rc, in func)
{
    printf(func & " returns " & rc);
}

b. Object Exceptions:

This exception is raised when the specified object property equals our expected value.

Tools → exception handling → select exception type as Object → click new → enter exception name → select traceable object → select property with expected value to determine the situation → enter handler function name → click ok → click paste → click ok after reading the suggestion → click close → record the required navigation to recover the situation → make it a compiled module → write a load statement for it in the startup script of WinRunner.

Example:

public function mindq(in win, in obj, in attr, in val)
{
    printf(obj & " enabled");
}

c. Pop-Up Exceptions: These exceptions are raised when a specified window comes into focus. We can use these exceptions to skip unwanted windows in our application build during test execution.

[Diagram: during test script execution an unwanted window ("connection to server down") pops up over the build; the pop-up exception handler re-establishes the connection to the server so execution can continue.]

Tools → exception handling → select exception type as Pop-Up → click new → enter exception name → show unwanted window raising during testing → select handler action( “press enter / click cancel, click OK” and user defined function name) → click ok → click close.

To administrate exceptions during test execution, test engineers use below statements.

i. exception_off( ):

We can use this function to disable specific exception only.

Syntax:exception_off (“exception name”);

ii. exception_off_all( ):

We can use this function to disable all types of exceptions in your system.

Syntax: exception_off_all( );

iii. exception_on( ):

We can use this function to enable specified exception only.

Syntax:exception_on(“exception Name”);

Note: By default exceptions are in the "ON" position.
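A minimal sketch (the exception name is hypothetical), temporarily disabling an exception around a step that is expected to fail:

exception_off("open_window_exception");
set_window("X", 5);
exception_on("open_window_exception");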


WEB TESTING

WinRunner allows you to automate functionality testing on web interfaces also(HTML). WinRunner does not support XML objects.

In this test automation, test engineers apply the below coverages on web interfaces:

1. Behavioral Coverage
2. Input Domain Coverage
3. Error Handling Coverage (client & server validation)
4. Calculations Coverage
5. Back End Coverage
6. URL (Uniform Resource Locator) Coverage
7. Static Text Testing

Of the above coverages, URL testing and static text testing are new coverages for web application functionality testing.

Client / Server Vs Web:

[Diagram: a two-tier (client/server) application has a fat front end (VB, VC++, Java, D2K, PB, C, C++ ...) connected through a DSN to a thin back end database (Oracle, SQL Server, MS-Access, Sybase, MySQL, Informix ...); monitoring and manipulation happen in the front end, data storage in the back end. A three-tier (web) application has a thin browser front end (HTML, DHTML, XML, JavaScript ...), a web server layer (ASP, JSP, VB Script, Java servlets ...) reached over TCP/IP, and a database server reached through a DSN; monitoring happens in the browser, manipulation in the web server, data storage in the database server.]

I. URL's Testing:

It is an extra coverage in web application testing. During this test, test engineers validate link execution and link existence. Link execution means checking whether the link brings up the right page when you click it. Link existence means checking whether the corresponding link is in the right place or not.

To automate this testing using WinRunner, we select the web test option in the add-in manager during WinRunner launching. We can use the GUI checkpoint concept to automate URL testing. In this automation, test engineers create checkpoints on text links, image links, cells, tables and frames.

a. Text Link:

It is a non-standard object and it consists of a set of non-standard properties such as:

1. Background colour (hexadecimal colour number)
2. Broken link (valid / not valid)
3. Colour (hexadecimal number of the expected colour)
4. Font (style of text)
5. Text (expected link text)
6. URL (expected path of the next page)

Syntax:obj_check_gui(“check list”, “Checklist file name”, “expected value file.txt”, time to create);

b. Image link:

It is also a non-standard object and it consists of a set of non-standard properties such as:

1. Broken link (valid / not valid)
2. Image content (.bmp of the image)
3. Source (path of the image)
4. Type (plain image, dynamic image, image link, image button, previously saved site image, e.g. banner)
5. URL (path of the next page)

Syntax:obj_check_gui(“image file name”, “checklist file”, “expected value file.txt”, time to create);

To create the above checkpoints, test engineers collect information like the below from the development team:

Link Name      Off line URL
xxxxxx         xxxxxx
xxxxxxx        xxxxxxx
xxxxxx         xxxxxx

The above document is also known as a site map document. Before this web functionality testing, developers create two types of off line environments:

[Diagram: browser → web server → database server, connected over TCP/IP and a DSN. Local server environment: http://localserver/vdir/page.htm; local host environment: http://localhost/vdir/page.htm.]

c) Cell:

Cell indicates an area of a web page. It contains a set of text links & image links. To cover all these links through a single checkpoint, we can use cell properties.

To get cell properties, test engineers select an object first and then change their selection from the object to its parent cell.

1. Background colour (hexadecimal colour number)
2. Broken link (valid / not valid)
3. Cell content (image file paths and static text in that cell area)
4. Format (hierarchy of internal links)
5. Images (image file name, type, width, height)
6. Links (link names, expected off line URLs)

Syntax: win_check_gui("Cell logical name", "checklist file name.ckl", "expected values file.txt", time to create);

d. Table:

It is also a non-standard object and it consists of a set of non-standard properties. These properties are not suitable for URL testing; test engineers use them for cell coverage during testing (columns, format, rows & table content).

e. FRAME:

It is also a non-standard object and it consists of a set of standard and non-standard properties. But test engineers use only the non-standard properties for URL testing:

1. Broken link (link name, URL, YES / NO)
2. Count objects (number of standard & non-standard objects in that frame)
3. Format (hierarchy of internal links)
4. Frame content (static text in the web page)
5. Images (image file name, type, width, height)
6. Links (link names, expected off line URLs)

Syntax: win_check_gui("frame logical name", "checklist file name.ckl", "expected values file.txt", time to create);

Note: In general test engineers conduct URL testing at frame level. If a frame consists of a huge number of links, test engineers conduct it at cell level.

II. Static Text Testing:

To conduct calculations & other text based tests, we can use get text option in create menu. This option consists of 4 sub options when you select web test option in add in manager.

a. From Object / Window :

To capture a web object's value into a variable we can use this option.

Syntax: web_obj_get_text("web object name", "#row no", "#column no", variable, "text before", "text after", time to create);

Example:

Example:

Rediff mail box: the inbox table lists S.no, Subject, Date and Size (e.g. 10kb, 2kb ...) for every received mail, with a Total row at the bottom. Expected: Total = sum of all received mail sizes.

sum = 0;
set_window("rediff", 5);
tbl_get_row_count("mail box", n);
for(i = 1; i < n; i++)
{
    tbl_get_cell_data("mail box", "#" & i, "#3", s);
    s = substr(s, 1, length(s) - 2);
    sum = sum + s;
}
web_obj_get_text("total obj", "#0", "#0", tot, " ", "kb", 2);
if(tot == sum)
    tl_step("s1", 0, "calculation is pass");
else
    tl_step("s1", 1, "calculation is fail");

b. From Screen Area:

This option does not support web pages.

c. From Selection:

To capture static text from web pages, we can use this option.

Navigation: Create menu → get text → from selection → select required text → right click to release → select text before & text after → click ok.

Syntax:web_frame_get_text(“frame logical name”, “variable”, “text before”, “text after”, time to create);

Example:

Shopping page: the page shows "American $ xxxx as ...", "Australian $ xxxx as ..." and "Indian Rs xxxx as ...". Expected: Indian Rs = American $ value x 45 + Australian $ value x 35.

web_frame_get_text("shopping", x, "American $", "as", 1);
web_frame_get_text("shopping", y, "Australian $", "as", 1);
web_frame_get_text("shopping", z, "Indian Rs", "as", 1);
if (z == x * 45 + y * 35)
    tl_step("s1", 0, "Test is pass");
else
    tl_step("s1", 1, "Test is fail");

d) Web Text Check Point:


We can use this checkpoint to verify the existence of text in a web page at a specified position, identified through text before and text after.


Example:

obj_get_text("edit", x);
web_frame_get_text("frame logical name", x, "abc", "xyz");

Web Functions:

1. web_link_click ( ):

WinRunner uses this function to record a text link operation.

Syntax:web_link_click (“link text” );

2. web_image_click( ):

WinRunner uses this function to record an image link operation.

Syntax:web_image_click(“image file name”, x, y);

3. web_browser_invoke( ):

WinRunner uses this function to open a web application through a test script.

Syntax: web_browser_invoke(IE / NETSCAPE, "URL");
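A minimal sketch (the URL is a placeholder for the page under test):

web_browser_invoke(IE, "http://localhost/vdir/login.htm");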

WinRunner 6.0 Vs WinRunner 7.0

WinRunner 7.0 provides the below facilities as extra:

→ Auto learning
→ Per test mode
→ Selective recording
→ Run time record check
→ Web testing concepts


→ GUI spy (to identify whether an object is recognizable or not)

Note: To stop spying we can use Left Ctrl + F3.


Testing Documents

Test Policy↓

Test Strategy ↓

Test Methodology↓

Test plan↓

Test Cases↓

Test Procedures↓

Test Scripts↓

Defect Reports↓

Final Test Summary Report

I. TEST POLICY:

This document is developed by quality control people (almost always management). In this document QC defines the "Testing Objectives".

XXXXXXXXXXXXXXXXXXXXXXX

Testing Definition : Verification & Validation

Testing Process : Proper planning before starts testing.

Testing Standard : 1 defect per 250 lines of code / 1 defect per 10 function points

Testing Measurement : Quality Assessment Measurement, Testing Management Measurement, Process Capability Measurement

XXXXXXX (C.E.O)

II. TEST STRATEGY:


It is a company level document and developed by Quality Analyst / Project Manager category people. This document defines testing approach.

Components

1. Scope & Objective: Definition & purpose of testing in organization.2. Business Issues: Budget control for testing.

100%

64 36 (Development & Maintenance) (Testing)

3. Test Approach: mapping between development stages (information gathering & analysis, design, coding, system testing, maintenance) and testing issues (e.g. ease of use, authorization ...). Each cell of the matrix marks whether a testing issue applies at that development stage (X) or depends on change requests during maintenance.

This matrix is known as the "TEST RESPONSIBILITY MATRIX" (TRM).

4. Test Deliverables: required testing documents to be prepared.
5. Roles & Responsibilities: names of jobs in the testing team and their responsibilities.
6. Communication & Status Reporting: required negotiations between two consecutive jobs in the testing team.
7. Defect Reporting & Tracking: required negotiations between the testing team and the development team during test execution.
8. Automation & Testing Tools: purpose of automation and possibilities to go for test automation.
9. Testing Measurements & Metrics: QAM, TMM, PCM.
10. Risks & Mitigations: what possible problems will come up in testing and the solutions to overcome them.
11. Change & Configuration Management: how to handle change requests during testing.
12. Training Plan: required training sessions for the testing team before the testing process starts.

Testing Issues:

To define quality software, organizations use at most 15 testing issues.

QC    → Quality
QA/PM → Test Factor
TL    → Testing Technique
TE    → Test Cases

From the above model, a quality software testing process is formed with the below 15 testing issues.

1. Authorization: whether a user is valid or not to connect to the application.
2. Access Control: whether a valid user has permission to use a specific service or not.
3. Audit Trail: maintains metadata about user operations in our application.
4. Continuity of Processing: inter-process communication (module to module).
5. Correctness: meets customer requirements in terms of functionality.
6. Coupling: co-existence with other existing software to share resources.
7. Ease of Use: user friendliness of the screens.
8. Ease of Operation: installation, uninstallation, dumping, downloading, uploading etc.
9. File Integrity: creation of backup.
10. Reliability: recovery from an abnormal state.
11. Performance: speed of processing.
12. Portability: runs on different platforms.
13. Service Levels: order of functionalities.
14. Maintainability: whether our application build is serviceable at the customer site for a long time or not.
15. Methodology: whether our testers are following standards or not during testing.

Test Factors Vs Black Box testing Techniques

1. Authorization → Security Testing, Functionality Testing
2. Access Control → Security Testing, Functionality Testing
3. Audit Trail → Functionality Testing, Error Handling Testing
4. Continuity of Processing → White Box Testing (Execution Testing, Operations Testing)
5. Correctness → Functionality Testing, Requirements Testing
6. Coupling → Inter-System Testing
7. Ease of Use → User Interface Testing, Manual Support Testing
8. Ease of Operation → Installation Testing
9. File Integrity → Functionality Testing, Recovery Testing
10. Reliability → Recovery Testing (one user), Stress Testing (peak hours)
11. Performance → Load Testing, Stress Testing, Storage Testing, Data Volume Testing
12. Portability → Compatibility Testing, Configuration Testing
13. Service Levels → Functionality Testing, Stress Testing
14. Maintainability → Compliance Testing, Management Level Testing
15. Methodology → Compliance Testing

III.TEST METHODOLOGY:

It is a project level document. Methodology provides required testing approach to be followed for current project. In this level QA / PM selects possible approaches for corresponding project testing through below procedure.

Step 1: Acquire the test strategy.
Step 2: Determine the project type (traditional, off-the-shelf, or maintenance). A traditional project goes through all the stages (information gathering & analysis, design, coding, system testing, maintenance); off-the-shelf and maintenance projects skip several of these stages (marked X in the strategy table).

Note: Depending on the project type, QA/PM decreases the number of columns in the TRM.

Step 3: Determine project requirements.
Note: Depending on the project requirements, QA/PM decreases the number of rows in the TRM.

Step 4: Identify the scope of the application.
Note: Depending on expected future enhancements, QA/PM adds back some of the previously deleted rows and columns.

Step 5: Identify tactical risks.
Note: Depending on the analyzed risks, QA/PM decreases the number of selected issues (rows) in the TRM.

Step 6: Finalize the TRM for the current project.

Step 7: Prepare the system test plan.

Step 8: Prepare module test plans if required.

Testing Process:

[Process flow: Test Initiation → Test Planning → Test Design → Test Execution (receive build; report defects; regression) → Test Reporting → Test Closure]

PET Process (Process Expert Tools and Techniques) :

It is a refinement of the V model. It defines the mapping between development stages and testing stages. Under this model organizations maintain a separate team for functionality and system testing; the remaining stages of testing are done by development people. This model was developed at HCL and is recognized by the QA forum of India.

Information Gathering (BRS)
↓
Analysis (S/W RS)
↓
Design                        |  Test Initiation
↓                             |  ↓
Coding                        |  Test Planning & Training
↓                             |  ↓
Unit & Integration Testing    |  Test Design (test case selection & closure)
↓
Initial Build
↓
Sanity / Smoke / TAT / BVT (Level 0)
↓
Test Automation
↓
Create test scripts / test batches / test suites
↓
Select a batch and start execution (Level 1)
↓
If a test engineer gets a mismatch → suspend that batch, report the defect to the developers, receive the modified build after defect fixing and resolving, run regression (Level 2), then take up the next batch; otherwise continue with the next independent batch
↓
Test Closure
↓
Final Regression / Release Testing / Pre-Acceptance / Post Mortem (Level 3)
↓
User Acceptance Testing
↓
Sign Off

IV. TEST PLANNING:

After finalization of possible tests to be applied for corresponding project, test lead category people concentrate on test plan document preparation to define work allocation in terms of “ what to test?”, “Who to test ?”, “when to test ?”, and “How to test ?”.

To prepare test plan documents, test plan author follows below approach

1. Team Formation:

In general, test planning starts with testing team formation. To define a testing team, test plan author depends on below factors.

i. Availability of testers
ii. Test duration
iii. Availability of test environment resources

Case Study:

Test Duration:
- Client / Server or Web or ERP: 3 to 5 months of functional & system testing
- System software: 7 to 9 months of functional & system testing
- Mission critical (robots, satellites etc): 12 to 15 months of functional & system testing
- Team size: 3 : 1 (developers : testers)

2. Identify Tactical Risks:

After completion of testing team formation, the test plan author analyses possible risks and mitigations.

Example:

Risk 1: Lack of knowledge of the test engineers on that domain.
Risk 2: Lack of resources.
Risk 3: Lack of budget (time).
Risk 4: Lack of test data (sometimes test engineers conduct ad-hoc testing depending on past experience).
Risk 5: Lack of development process rigor (seriousness).
Risk 6: Delays in delivery.
Risk 7: Lack of communication (between the testing team and the test lead / developers, or within the testing team).

3. Prepare test Plan :

[Diagram: inputs to test planning are the development documents and the finalized TRM; the test plan author forms the team, identifies tactical risks, prepares the test plan and reviews it; the output is the system test plan.]

After completion of testing team formation and risks analysis, test plan author concentrates on test plan documentation in “IEEE” format.

Format:

1. Test plan ID: Unique Number / Name

2. Introduction: About Project

3. Test Items: Modules / Functions / Services / Features

4. Features to be Tested: Responsible modules for test design.

5. Features not to be tested: which ones and why not.

Note: 3-5 What to test?

6. Approach: list of selected techniques to be applied on the above specified modules (from the finalized TRM).

7. Feature pass/fail criteria: when a feature is pass and when a feature is fail.

8. Suspension Criteria: Possible abnormal situations raised during above features testing.

9. Test Environment: Require hardware & software to conduct testing on above features.

10. Test Deliverables: required testing documents to be prepared during testing.

11. Test Tasks: necessary tasks to do before starting every feature's testing.

Note: 6 –11 How to test ?

12. Staff & Training Need: Names of selected test engineers and training requirements to them.

13. Responsibilities: Work allocation to above selected staff members.

Note : 12 & 13 Who to test?

14. Schedule: Dates & Times

Note: 14 – when to test?

15. Risks & Mitigations: Possible testing level risks and solutions to overcome them.

16. Approvals: Signatures of test plan author and PM/QA.

4. Review Test Plan:

After completion of the plan document preparation, the test plan author conducts a review of it for completeness and correctness. In this review the plan author follows coverage analysis:

→ BR based coverage (what to test? review)
→ Risks based coverage (when and who to test? review)
→ TRM based coverage (how to test? review)

Case Study:

Deliverable                                   | Responsibility                                       | Completion time
Test case selection                           | Test engineer                                        | 30 to 40 days
Test case review                              | Test lead / engineer                                 | 4 to 5 days
Requirements traceability matrix              | Test lead                                            | 1 to 2 days
Test automation (including sanity testing)    | Test engineer                                        | 10 to 20 days
Test execution including regression testing   | Test engineer                                        | 40 to 60 days
Defect reporting                              | Test engineer / everyone                             | Ongoing
Communication & status reporting              | Test lead                                            | Weekly twice
Test closure & final regression               | Test lead / test engineer                            | 4 to 5 days
User acceptance testing                       | Customer site people / involvement of testing team   | 4 to 5 days
Sign off                                      | Test lead                                            | 1 to 2 days

V. Test Design:

After completion of test planning and the required training for the testing team, the testing team members prepare a list of test cases for their responsible modules. There are three types of test case design methods to cover core level testing (usability & functionality testing).

1. Business logic based test case design
2. Input domain based test case design
3. User interface based test case design

1. Business logic based test case design:

In general test engineers write a set of test cases depending upon the use cases in the S/W RS. Every use case describes a functionality in terms of input, process and output. Depending on these use cases, test engineers write test cases to validate that functionality.

BRS
↓
Use Cases / Functional Specs (S/W RS)  →  Test Cases
↓
HLD
↓
LLDs
↓
Coding (.EXE)

From the above model, test engineers prepare test cases depending on the corresponding use cases, and every test case defines a test condition to be applied.

To prepare test cases, test engineers study the use cases using the below approach.

Step 1: Collect the use cases of our responsible modules.
Step 2: Select a use case and its dependencies from that list (determinant → dependent).
    Step 2.1: Identify the entry condition (base state).
    Step 2.2: Identify the input required (test data).
    Step 2.3: Identify the exit condition (end state).
    Step 2.4: Identify the output and the outcome (expected results).
    Step 2.5: Identify the normal flow (navigation).
    Step 2.6: Identify alternative flows and exceptions (protocols).
Step 3: Write test cases depending on the above information.
Step 4: Review the test cases for completeness and correctness.
Step 5: Go to step 2 until completion of all use cases.

Use Case 1:

A login process allows UID & PWD to validate users. During this validation, login process allows UID as alphanumeric from 4 to 16 characters long and PWD allows alphabets in lower case from 4 to 8 characters long.

[Diagram: a login screen with UID and PWD fields and an OK button; valid values lead to the inbox.]

Test Case 1: Successful entry of UID.

BVA (size):
Min     – 4  → Pass
Max     – 16 → Pass
Min - 1 – 3  → Fail
Min + 1 – 5  → Pass
Max - 1 – 15 → Pass
Max + 1 – 17 → Fail

ECP (type):
Valid: a – z, A – Z, 0 – 9
Invalid: special characters, blank

Test Case 2: Successful entry of PWD.

BVA (size):
Min     – 4 → Pass
Max     – 8 → Pass
Min - 1 – 3 → Fail
Min + 1 – 5 → Pass
Max - 1 – 7 → Pass
Max + 1 – 9 → Fail

ECP (type):
Valid: a – z
Invalid: A – Z, 0 – 9, special characters, blank

Test Case 3: Successful login operation

UID       | PWD       | Criteria
Valid     | Valid     | Pass
Valid     | Invalid   | Fail
Invalid   | Valid     | Fail
Value     | Blank     | Fail
Blank     | Value     | Fail

Use Case 2:

In a shopping application the user can place different purchase orders. Every purchase order allows selection of an item number and entry of a quantity up to 10. The system returns the price of one item and the total amount depending on the given quantity.

Test Case 1: Successful selection of item number.

Test Case 2: Successful entry of QTY

BVA (range):
Min     – 1  → Pass
Max     – 10 → Pass
Min - 1 – 0  → Fail
Min + 1 – 2  → Pass
Max - 1 – 9  → Pass
Max + 1 – 11 → Fail

ECP (type):
Valid: 0 – 9
Invalid: A – Z, a – z, special characters, blank

Test Case 3: Successful calculation, Total = Price X QTY

Use Case 3:

In an insurance application, the user can apply for different types of insurance policies. When the user selects insurance type B, the system asks for the age of the customer. The age should be > 18 years and < 60 years.

Test Case 1: Successful selection of type B insurance.
Test Case 2: Successful focus to age.
Test Case 3: Successful entry of age

BVA (range):
Min     – 19 → Pass
Max     – 59 → Pass
Min - 1 – 18 → Fail
Min + 1 – 20 → Pass
Max - 1 – 58 → Pass
Max + 1 – 60 → Fail

ECP (type):
Valid: 0 – 9
Invalid: A – Z, a – z, special characters, blank

Use Case 4:

A door opens when a person comes in front of the door. The door closes after the person gets in.

Test Case 1: Successful door open when a person comes in front of the door.
Test Case 2: Unsuccessful door open due to absence of a person in front of the door.
Test Case 3: Successful door closing after the person gets in.
Test Case 4: Unsuccessful door closing due to a person standing at the door.

Use Case 5: Prepare test cases for a washing machine operation.

Test Case 1: Successful power supply.
Test Case 2: Successful door open.
Test Case 3: Successful filling of water.
Test Case 4: Successful dropping of detergent.
Test Case 5: Successful filling of clothes.
Test Case 6: Successful door closing.
Test Case 7: Unsuccessful door close due to overflow of clothes.
Test Case 8: Successful selection of washing settings.
Test Case 9: Successful washing operation.
Test Case 10: Unsuccessful washing due to wrong settings.
Test Case 11: Unsuccessful washing due to lack of power.
Test Case 12: Unsuccessful washing due to lack of water.
Test Case 13: Unsuccessful washing due to water leakage.
Test Case 14: Unsuccessful washing due to door open in the middle of the process.
Test Case 15: Unsuccessful washing due to machinery problems.
Test Case 16: Successful drying of clothes.

Use Case 6: Prepare test cases for money withdrawal from an ATM.

Test Case 1: Successful insertion of card.
Test Case 2: Unsuccessful operation due to wrong angle of card insertion.
Test Case 3: Unsuccessful operation due to invalid card.
Test Case 4: Successful entry of PIN number.
Test Case 5: Unsuccessful operation due to entry of wrong PIN number three times.
Test Case 6: Successful selection of language.
Test Case 7: Successful selection of account type.
Test Case 8: Unsuccessful operation due to invalid account type selection.
Test Case 9: Successful selection of withdrawal option.
Test Case 10: Successful entry of amount.
Test Case 11: Unsuccessful operation due to wrong denominations.
Test Case 12: Successful withdrawal (correct amount, right receipt and card comes back).
Test Case 13: Unsuccessful withdrawal due to amount > possible balance.
Test Case 14: Unsuccessful withdrawal due to amount > day limit (including multiple transactions).
Test Case 15: Unsuccessful transaction due to lack of amount in the ATM.
Test Case 16: Unsuccessful due to server failure.
Test Case 17: Unsuccessful due to clicking cancel after inserting the card.
Test Case 18: Unsuccessful due to clicking cancel after inserting the card & PIN.
Test Case 19: Unsuccessful due to clicking cancel after inserting the card, PIN & language selection.
Test Case 20: Unsuccessful due to clicking cancel after inserting the card, PIN, language & account type selection.
Test Case 21: Unsuccessful due to clicking cancel after inserting the card, PIN, language, account type & amount selection.

Use Case 7:

In an E-Banking application users can connect to the bank server using their personal computers. In this login process the user fills the below fields.

Password → 6 digit number
Area code → 3 digit number, allows blank
Prefix → 3 digit number, does not begin with 0 or 1
Suffix → 6 digit alphanumeric
Commands → check deposit, money transfer, bill pay and mini statement

Test Case 1: Successful entry of password.

BVA (size):
Min = Max = 6 → Pass
Min - 1 – 5 → Fail
Min + 1 – 7 → Fail

ECP (type):
Valid: 0 – 9
Invalid: A – Z, a – z, special characters, blank

Test Case 2: Successful entry of area code

BVA (size):
Min = Max = 3 → Pass
Min - 1 – 2 → Fail
Min + 1 – 4 → Fail

ECP (type):
Valid: 0 – 9, blank
Invalid: A – Z, a – z, special characters

Test Case 3: Successful entry of prefix

BVA (range):
Min     – 200  → Pass
Max     – 999  → Pass
Min - 1 – 199  → Fail
Min + 1 – 201  → Pass
Max - 1 – 998  → Pass
Max + 1 – 1000 → Fail

ECP (type):
Valid: 0 – 9
Invalid: A – Z, a – z, special characters, blank

Test Case 4: Successful entry of suffix

BVA (size):
Min = Max = 6 → Pass
Min - 1 – 5 → Fail
Min + 1 – 7 → Fail

ECP (type):
Valid: 0 – 9, A – Z, a – z
Invalid: special characters, blank

Test Case 5: Successful selection of commands such as check deposit, money transfer, bill pay and mini statement.
Test Case 6: Successful connection to the bank server with all valid values.
Test Case 7: Successful connection to the bank server without filling the area code.
Test Case 8: Unsuccessful operation due to not filling all the fields except area code.

Test Case Format:

During test design test engineers write the list of test cases in IEEE format.

1. Test Case ID: unique number or name
2. Test Case Name: the name of the test condition to be tested
3. Features to be Tested: module / function / feature
4. Test Suite ID: batch ID in which this case is a member
5. Priority: importance of the test case
   P0: basic functionality
   P1: general function (input domain, error handling, compatibility, inter-systems etc)
   P2: cosmetic (user interface)
6. Test Environment: required hardware and software to execute this test case

7. Test Effort (person-hours): time to execute this test case (e.g. 20 minutes max)
8. Test Duration: date & time
9. Test Setup: required testing tasks to do before starting this case execution
10. Test Procedure: step by step procedure to execute this test case

Format:

Step No | Action | I/P required | Expected | Actual | Result | Comments

(The first four columns are filled during test design; Actual, Result and Comments are filled during test execution.)

11. Test Case Pass/Fail Criteria: When this case is pass and when this case is fail.

Note: In general test engineers are writing list of test cases along with step by step procedure only.

Example: Prepare the test procedure for the below test case: successful file save in Notepad.

Step No | Action                        | I/P required      | Expected
1       | Open Notepad                  | -                 | Empty editor
2       | Fill with text                | -                 | Save icon enabled
3       | Click save icon               | -                 | Save window appears
4       | Enter file name & click save  | Unique file name  | File name appears in the title bar of the editor

Example 2: Prepare the test scenario with expected results for the below test case: "Successful mail reply" in Yahoo.

Step No | Action                           | I/P required          | Expected
1       | Login to site                    | Valid UID, valid PWD  | Inbox appears
2       | Click Inbox                      | -                     | Mail box appears
3       | Click mail subject               | -                     | Mail message appears
4       | Click Reply                      | -                     | Compose window appears with To: received mail ID; Sub: received mail subject; CC: off; BCC: off; MSG: received message with comments
5       | Type new message and click send  | -                     | Acknowledgement from web server


2. Input Domain Based Test Case Design:

In general test engineers write the maximum number of test cases depending on the use cases / functional specs in the S/W RS. These functional specifications provide functional descriptions with inputs, outputs and process, but they are not responsible for providing information about the size and type of input objects. To collect this type of information, test engineers study the data model of their responsible modules (E-R diagrams in the LLDs).

During data model study, test engineer follows below approach.

Step 1: Collect the data model of the responsible modules.
Step 2: Study every input attribute in terms of size, type and constraints.
Step 3: Identify critical attributes in that list, which participate in manipulations and retrievals.
Step 4: Identify non-critical attributes, such as just-input or output-type attributes.

Example:

Critical: A/C No, Balance
Non-critical: A/C Name, A/C Orders

Step 5: Prepare BVA and ECP for every input object using a data matrix of the below form.

I/P Attribute | ECP Valid | ECP Invalid | BVA (size / range) Min | Max

Note: In general test engineers prepare step-by-step-procedure based test cases for functionality testing, and valid / invalid data-matrix based test cases for input domain testing of objects.

Case Study:

Prepare test cases with the required documentation depending on the below scenario.

In a bank automation software, fixed deposit is a functionality. A bank employee operates this functionality with the below inputs.

Customer Name → alphabets in lower case
Amount → Rs 1500 to 100000.00
Tenure → up to 12 months
Interest → numeric with decimal

From the functional specification (use cases), if the tenure is > 10 months the interest must be > 10%.

Test Case 1:

Test Case ID: TC_FD_1
Test Case Name: Successful entry of customer name

Data Matrix:

I/P Attribute: Customer Name
ECP valid: a – z
ECP invalid: A – Z, 0 – 9, special characters & blank
BVA (size): Min 1 character, Max 256 characters

Test Case 2:

Test Case ID: TC_FD_2
Test Case Name: Successful entry of amount

Data Matrix:

I/P Attribute: Amount
ECP valid: 0 – 9
ECP invalid: A – Z, a – z, special characters & blank
BVA (range): Min 1500, Max 100000

Test Case 3:

Test Case ID: TC_FD_3
Test Case Name: Successful entry of tenure

Data Matrix:

I/P Attribute: Tenure
ECP valid: 0 – 9
ECP invalid: A – Z, a – z, special characters & blank
BVA (range): Min 1, Max 12

Test Case 4:

Test Case ID: TC_FD_4
Test Case Name: Successful entry of interest

Data Matrix:

I/P Attribute: Interest
ECP valid: 0 – 9 with decimal
ECP invalid: A – Z, a – z, special characters & blank
BVA (range): Min 1, Max 100

Test Case 5:

Test Case ID: TC_FD_5
Test Case Name: Successful fixed deposit operation

Test Procedure:

Step No | Action                        | I/P required  | Expected
1       | Login to bank software        | Valid ID      | Menu appears
2       | Select fixed deposit          | -             | FD form appears
3       | Fill all fields and click OK  | All valid     | Acknowledgement from bank server
        |                               | Any invalid   | Error message from bank server

Test Case 6:

Test Case ID: TC_FD_6
Test Case Name: Unsuccessful fixed deposit operation due to tenure > 10 months & interest < 10%

Test Procedure:

Step No | Action                        | I/P required                                                      | Expected
1       | Login to bank software        | Valid ID                                                          | Menu appears
2       | Select fixed deposit          | -                                                                 | FD form appears
3       | Fill all fields and click OK  | Valid customer name, amount and tenure > 10 with interest > 10    | Acknowledgement from bank server
        |                               | Valid customer name, amount and tenure > 10 with interest < 10    | Error message from bank server

Test Case 7:

Test Case ID: TC_FD_7
Test Case Name: Unsuccessful fixed deposit operation due to not filling all fields

Test Procedure:

Step No | Action                        | I/P required                                                            | Expected
1       | Login to bank software        | Valid ID                                                                | Menu appears
2       | Select fixed deposit          | -                                                                       | FD form appears
3       | Fill all fields and click OK  | Valid customer name, amount, tenure and interest, but some left blank   | Error message from bank server

Note: Test cases 1 – 4 → input domain
      Test cases 5 – 6 → functionality
      Test case 7 → error handling

3. User Interface Based Test Case Design:

To conduct usability testing, test engineers write a list of test cases depending on our organisation's user interface conventions, global interface rules and the interests of customer site people.

Examples:

Test Case 1: Spell check.
Test Case 2: Graphics check (screen level alignment, font, style, colour, size (object width and height) and Microsoft's six rules).
Test Case 3: Meaningful error messages.
Test Case 4: Accuracy of data displayed (e.g. the same Amount shown consistently on every screen, DOB always displayed in DD/MM/YY format).
Test Case 5: Accuracy of data in the database as a result of user inputs (e.g. a value entered on the form as 10.768 is stored in the database table as 10.77 and shown on the report as 10.77).
Test Case 6: Accuracy of data in the database as a result of external factors (e.g. file attachments, greetings after one year).
Test Case 7: Meaningful help menus (manual support testing).

Review Test Cases:

After completing the writing of all possible test cases for their responsible modules, the testing team concentrates on a review of the test cases for completeness and correctness. In this review the testing team applies coverage analysis.

→ BR based coverage
→ Use case based coverage
→ Data model based coverage
→ User interface based coverage
→ Test responsibility based coverage

At the end of this review the test lead prepares the "Requirements Traceability Matrix" (also called the "Requirements Validation Matrix").


Business Requirements      | Sources (use cases, data model etc) | Test Cases
xxxxxxx (Login)            | xxxxxxxx                            | xxxxxxxx
xxxxxxxx (Mail Open)       | xxxxxxxx                            | xxxxxxxx
xxxxxxxxx (Mail Compose)   | xxxxxxxx                            | xxxxxxxx
xxxxxxxxx (Mail Reply)     | xxxxxxxx                            | xxxxxxxx

From the above model, the traceability matrix defines the mapping between customer requirements and the test cases prepared to validate those requirements.

VI. TEST EXECUTION:

After completion of test cases selection & their review, testing team concentrates on build release from development and test execution on the build.

1. Test Execution Levels / Phases:

Development side: stable build → (after defect reporting) defect fixing → defect resolving → modified build.
Testing side: Level 0 (Sanity / TAT / BVT) → test automation → Level 1 (Comprehensive) with defect reporting → Level 2 (Regression) on the modified build → Level 3 (Final Regression).

2. Test Execution Levels Vs Test Cases:

Level 0 → P0 test cases
Level 1 → all P0, P1 and P2 test cases as batches
Level 2 → selected P0, P1 and P2 test cases w.r.t. modifications
Level 3 → selected P0, P1 and P2 test cases w.r.t. critical areas in the master build

3. Build Version Control:

In general test engineers receive the build from development in the below mode:

Build → server soft base → (FTP, File Transfer Protocol) → tester's local host

From the above approach, test engineers dump the application build from the server to the local host through FTP. "Soft base" means a collection of softwares.

During test execution, test engineers receive modified builds from the soft base. To distinguish old builds from new builds, the development team gives each build a unique version number, which is understandable to testers.

For this version controlling, developers also use version control tools (Ex: VSS, Visual SourceSafe).

4. Level 0: ( Sanity / TAT / BVT)

After receiving the initial build, test engineers concentrate on the basic functionality of that build, to estimate its stability for complete testing. In this sanity testing, test engineers try to execute all P0 test cases to cover the basic functionality. If functionality is not working or is missing, the testing team rejects that build. If the testers decide the build is stable, they concentrate on executing all the test cases to detect defects.

During this sanity testing, test engineers observe below factors on the build.

→ Understandable→ Operatable→ Consistency→ Controllable→ Simplicity→ Maintainable→ Automatable

Because of the above eight testability factors, sanity testing is also known as testability testing or octangle testing.
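A minimal sketch of the Level 0 decision, assuming a hypothetical run_case helper that executes one test case on the build and returns True or False:

    def sanity_check(build_id, p0_cases, run_case):
        # The build is accepted for complete testing only if every P0 case passes.
        for case in p0_cases:
            if not run_case(build_id, case):
                print(f"Build {build_id} rejected: basic functionality failed in {case}")
                return False
        print(f"Build {build_id} is stable enough for comprehensive testing")
        return True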

5. Test Automation:


If test automation is possible, the testing team concentrates on creating test scripts using the corresponding testing tool. Every test script consists of navigational statements along with checkpoints.

Test automation is applied on the stable build (selective automation: all P0 and carefully selected P1 test cases).
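Purely as an illustration (not necessarily the tool used in this material), a similar script written with Selenium WebDriver in Python shows navigational statements followed by a checkpoint; the page URL and element IDs below are hypothetical:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://mail.example.com/login")            # navigational statements
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "signin").click()

    assert "Inbox" in driver.title, "Login checkpoint failed"   # checkpoint
    driver.quit()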

6. Level 1(Comprehensive Testing) :

After completion of sanity testing and possible test automation, the testing team concentrates on forming test batches of dependent test cases. A test batch is also known as a test suite or test set. During the execution of these test batches, the test engineer prepares a test log document; this document consists of three types of entries.

→ Passed – all expected = actual
→ Failed – any one expected != actual
→ Blocked – corresponding parent functionality failed

Comprehensive Test Cycles
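A minimal sketch of how a test log entry could be derived, assuming each executed step is recorded as an (expected, actual) pair and the parent functionality's result is known:

    def log_status(checks, parent_passed=True):
        if not parent_passed:
            return "Blocked"   # the corresponding parent functionality failed
        if all(expected == actual for expected, actual in checks):
            return "Passed"    # every expected value equals the actual value
        return "Failed"        # at least one expected value differs from the actual value

    print(log_status([("Inbox", "Inbox"), (10, 10)]))   # Passed
    print(log_status([("Inbox", "Error page")]))        # Failed
    print(log_status([], parent_passed=False))          # Blocked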

7. Level 2 (Regression Testing):

During comprehensive test execution, test engineers report mismatches as defects to the developers. After receiving the modified build from them, test engineers concentrate on regression testing to confirm the bug fixes and to check for side effects.

Resolved bug severity:      High                       Medium                      Low
P0 test cases               All P0                     All P0                      Some P0
P1 test cases               All P1                     Carefully selected P1       Some P1
P2 test cases               Carefully selected P2      Carefully selected P2       Some P2
(Re-executed on the modified build.)

(Diagram: execution statuses tracked in a test management tool – In queue, In Progress, Passed, Failed, Blocked, Skipped, Partial pass/fail, Closed.)

Case 1:

If the severity (impact) of the bug resolved by the development team is high, test engineers re-execute all P0, all P1 and carefully selected P2 test cases on the modified build.

Case 2:

If the severity of the resolved bug is medium, test engineers re-execute all P0, carefully selected P1 and some P2 test cases on the modified build.

Case 3:

If the severity of the resolved bug is low, test engineers re-execute some P0, some P1 and some P2 test cases on the modified build.

Case 4:

If the development team releases a modified build due to sudden changes in the project requirements, test engineers re-execute all P0, all P1 and carefully selected P2 test cases w.r.t. those requirement modifications.
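A minimal sketch of the selection rules in the four cases above, assuming the P0/P1/P2 buckets are plain lists and a hypothetical pick_subset helper chooses the "carefully selected" or "some" cases:

    def regression_suite(resolved_severity, p0, p1, p2, pick_subset):
        # Map the severity of the resolved bug to the cases re-executed on the modified build.
        if resolved_severity == "high":
            return p0 + p1 + pick_subset(p2)
        if resolved_severity == "medium":
            return p0 + pick_subset(p1) + pick_subset(p2)
        return pick_subset(p0) + pick_subset(p1) + pick_subset(p2)   # low severity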

VII. TEST REPORTING:

During comprehensive testing, test engineers report mismatches as defects to the developers using an IEEE-style defect report format.

1. Defect ID : unique number / name
2. Description : summary of the defect
3. Feature : module / function / service in which the test engineer found this defect
4. Test Case Name : corresponding failed test condition
5. Reproducible : Yes / No (Yes – the defect appears every time; No – the defect appears rarely)
6. If Yes : attach the test procedure
7. If No : attach a snapshot and strong reasons
8. Status : New / Reopen (New – the defect appears for the first time; Reopen – reappearance of a defect that was once closed)
9. Severity : seriousness of the defect w.r.t. functionality
   High → testing cannot continue without resolving this defect (show stopper)
   Medium → testing can continue, but the defect is mandatory to resolve
   Low → may or may not be resolved
10. Priority : importance of the defect w.r.t. the customer (high / medium / low)
11. Reported By : name of the test engineer
12. Reported On : date of submission
13. Assigned To : name of the responsible person on the development side (PM)
14. Build Version ID : the build version in which the test engineer found this defect
15. Suggested Fix : the tester may provide suggestions to solve this defect (optional)

Filled in by developers:
16. Fixed By : PM / team lead
17. Resolved By : programmer's name
18. Resolved On : date of resolving


If a high-severity defect is rejected:
19. Resolution Type :
20. Approved By : signature of the PM
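A minimal sketch of this report as a data structure (field names mirror the list above; the default values are hypothetical):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DefectReport:
        defect_id: str
        description: str
        feature: str
        test_case_name: str
        reproducible: bool
        status: str = "New"          # New / Reopen
        severity: str = "Medium"     # High / Medium / Low
        priority: str = "Medium"     # High / Medium / Low
        reported_by: str = ""
        reported_on: str = ""
        assigned_to: str = ""
        build_version_id: str = ""
        suggested_fix: Optional[str] = None
        # fields filled in by the developers
        fixed_by: Optional[str] = None
        resolved_by: Optional[str] = None
        resolved_on: Optional[str] = None
        resolution_type: Optional[str] = None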

Defect Age:

The time gap between "Reported On" and "Resolved On".
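For example (the dates are hypothetical), the age is simply the number of days between the two fields:

    from datetime import date

    reported_on = date(2024, 3, 1)
    resolved_on = date(2024, 3, 9)
    defect_age = (resolved_on - reported_on).days
    print(f"Defect age: {defect_age} days")   # 8 days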

Defect Submission Process:

Large-Scale Organisations:
  Testing side: Test Engineer → Test Lead → Test Manager → QA
  Development side: Project Manager → Team Lead → Developer
  Defects are passed between the two sides through transmittal reports.

Medium & Small-Scale Organisations:
  Testing side: Test Engineer → Test Lead
  Development side: Project Manager → Team Lead → Developer
  Defects are passed between the two sides through transmittal reports.

Defect Status Cycle:

New
  ↓
Open / Rejected / Deferred (deferred – the defect is accepted but will not be resolved in this version)
  ↓
Closed
  ↓
Reopen
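A minimal sketch of this cycle as a table of allowed status transitions (the assumption that a reopened defect goes through the cycle again is mine, not stated in the source):

    ALLOWED = {
        "New":      {"Open", "Rejected", "Deferred"},
        "Open":     {"Closed"},
        "Rejected": {"Closed"},
        "Deferred": {"Closed"},
        "Closed":   {"Reopen"},
        "Reopen":   {"Open", "Rejected", "Deferred"},
    }

    def move(current, new):
        # Refuse any status change that the cycle above does not allow.
        if new not in ALLOWED.get(current, set()):
            raise ValueError(f"Illegal transition: {current} -> {new}")
        return new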

Defect Life Cycle / Bug Life Cycle:

Detect Defect
  ↓
Reproduce Defect
  ↓
Report Defect
  ↓
Fix Defect
  ↓
Resolve Defect
  ↓
Close Defect

Defect Resolution Type:

After receiving defect reports from the testers, the developers review each defect and send a resolution type back to the testers as a reply.

1. Duplicate: rejected because this defect is the same as a previously reported defect.
2. Enhancement: rejected because this defect relates to a future requirement of the customer.
3. Hardware Limitations: rejected because this defect arises from limitations of hardware devices.
4. Software Limitations: rejected because this defect arises from limitations of software technologies.
5. Not Applicable: rejected because the defect has no proper meaning.
6. Function as Designed: rejected because the coding is correct w.r.t. the design documents.
7. Need More Information: neither accepted nor rejected; the developer requires extra information to understand the defect.
8. Not Reproducible: neither accepted nor rejected; the developer requires the correct procedure to reproduce the defect.
9. No Plan to Fix It: neither accepted nor rejected; the team wants extra time to fix it.
10. Fixed: the developer accepts the defect as to be resolved.
11. Fixed Indirectly: accepted but not to be resolved in this version (deferred).
12. User Misunderstanding: extra negotiations between the testing and development teams.


Types of Defects:

1. User Interface Bugs: Low Severity

Ex 1: Spelling mistake → High Priority
Ex 2: Improper alignment → Low Priority

2. Boundary Related Bugs: Medium Severity

Ex 1: Does not allow valid type → High Priority
Ex 2: Allows invalid type also → Low Priority

3. Error Handling Bugs: Medium Severity

Ex 1: Does not provide an error message window → High Priority
Ex 2: Improper meaning of error messages → Low Priority

4. Calculation Bugs: High Severity

Ex 1: Final output is wrong → Low Priority
Ex 2: Dependent results are wrong → High Priority

5. Race Condition Bugs: High Severity

Ex 1: Deadlock → High Priority
Ex 2: Improper order of services → Low Priority

6. Load Condition Bugs: High Severity

Ex 1: Does not allow multiple users to operate → High Priority
Ex 2: Does not allow the customer-expected load → Low Priority

7. Hardware Bugs: High Severity

Ex 1: Does not handle device → High Priority
Ex 2: Wrong output from device → Low Priority

8. ID Control Bugs: Medium Severity

Ex: Logo missing, wrong logo, version number mistake, copyright window missing, developers' names missing, testers' names missing.

9. Version Control Bugs: Medium Severity

Ex: Differences between two consecutive build versions.

10. Source Bugs: Medium Severity

Ex: Mistakes in help documents.

VIII. TEST CLOSURE:


After completion of all possible test cycle executions, the test lead conducts a review to estimate the completeness and correctness of testing. In this review the test lead considers the factors below, along with the test engineers.

1. Coverage Analysis:

→ BR-based coverage
→ Use case based coverage
→ Data model based coverage
→ UI-based coverage
→ TRM-based coverage

2. Bug Density:

Ex: A → 20%
    B → 20%
    C → 40% ← final regression
    D → 20%

3. Analysis of Deferred Bugs:

Whether the deferred bugs are really deferrable or not.

At the end of this review, the testing team concentrates on final regression testing of the high bug-density modules, if time is available.

Level 3: (Final Regression / Pre Acceptance Testing)
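A minimal sketch of picking the final regression candidates, using the hypothetical bug-density figures from the example above and an assumed 30% threshold:

    bug_density = {"A": 0.20, "B": 0.20, "C": 0.40, "D": 0.20}   # share of defects per module
    threshold = 0.30
    final_regression_modules = [m for m, d in bug_density.items() if d >= threshold]
    print(final_regression_modules)   # ['C']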

IX. User Acceptance Testing:

After completion of the final regression cycles, the organisation's management concentrates on user acceptance testing to collect feedback. There are two approaches to conduct this testing: alpha (α) testing and beta (β) testing.

X. SIGN OFF:

After completion of user acceptance testing and the resulting modifications, the test lead concentrates on creating the final test summary report. It is part of the software release note. This final test summary report consists of the documents below.

(Final regression flow: Gather Regression Requirements → Effort Estimation → Plan Regression → Final Regression → Test Reporting.)


→ Test strategy / methodology (TRM)
→ System test plan
→ Requirements Traceability Matrix
→ Automated test scripts
→ Bugs summary report

Bugs summary report columns: Bug | Description | Feature | Found By | Severity | Status (Closed / Deferred) | Comments


Auditing:

To audit the testing process, quality people use three types of measurements and metrics.

1. QAM (Quality Assessment Measurement):

These measurements are used by quality analysts / the PM during the testing process (once a month).

Stability:

(Graph: defects found plotted against testing effort.)
20% of testing → 80% of defects
80% of testing → 20% of defects

Sufficiency:

→ Requirements coverage
→ Type–Trigger analysis

Defect Severity Distribution:

→ Organization – Trend limit check.

2. TMM (Test Management Measurement):

These measurements are used by the test lead during the testing process (twice a week).

Test Status:

→ Completed
→ In progress
→ Yet to execute

Delays in Delivery:

→ Defect arrival rate
→ Defect resolution rate
→ Defect age

Test Efficiency:

→ Cost to find a defect (no. of defects / person-day)

(Graph: defect arrival rate – number of defects plotted against time.)

3. PCM (Process Capability Measurement):


These measurements are used by project management to improve the capability of the testing process, based on customer feedback on existing software under maintenance.

Test Effectiveness:

→ Requirements coverage
→ Type–Trigger analysis

Defect Escapes (Missed defects):

→ Type–Phase analysis

Test Efficiency:

→ Cost to find a defect (no. of defects / person-day)
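As a worked example of this efficiency metric (the figures are hypothetical): if the team logged 120 defects over 60 person-days of testing, the metric works out to 2 defects per person-day:

    defects_found = 120
    person_days_spent = 60
    print(defects_found / person_days_spent)   # 2.0 defects per person-day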