

Copyright © 2004 by Mahesh D. I. All Rights Reserved

A PRACTITIONER'S GUIDE TO

SOFTWARE TESTING

Prepared By :

Mahesh D. I [[email protected]] In association with: Prem Kumar N [[email protected]]


CONTENTS

1 SOFTWARE DEVELOPMENT LIFE CYCLE

2 TESTING

3 TESTING LIFE CYCLE

Rough Picture of Testing Life Cycle

Life Cycle Models

Waterfall Model

Prototyping Life Cycle

Phased Life Cycle Models:

Incremental Development: Evolutionary development V – Model

Parallel Testing

End Phase Testing

WHAT DOES TESTING PROVE?

VERIFICATION

VALIDATION

4 DEFECTS

5 SOFTWARE RELIABILITY

Failure

Fault / Defect /Bug

Debugging

Bugs severity

Bug Life Cycle

6 TESTING CONCEPTS

TYPES OF TESTING

Black Box Testing White Box Testing Grey Box Testing

LEVELS OF TESTING

Unit Testing Integration Testing System Testing Acceptance Testing Regression Testing

TEST STRATEGY

TEST STRATEGY FORMAT

TEST PLAN

System Test Plan Integration Test Plan

Unit Test Plan


TEST CASE

6.3.3.1 System Test Cases

6.3.3.2 Integration Test Cases

6.3.3.3 Unit Test Cases

TEST METRICS

6.3.4.1 Types of Metrics

TESTING PROCESS

6.3.5.1 Input

6.3.5.2 Entry Criteria

6.3.5.3 Validation Criteria

INPUTS AND OUTPUTS FOR THE TEST PROCESS

TEST COVERAGE

7 MANUAL TESTING

8 AUTOMATED TESTING

9 SOFTWARE TESTING TYPES

Acceptance Testing Accessibility Testing Ad Hoc Testing Agile Testing Alpha Testing Application Binary Interface (ABI) Application Programming Interface (API) Automated Software Quality (ASQ) Automated Testing Backus-Naur Form Basic Block Basis Path Testing Basis Set Baseline Beta Testing Binary Portability Testing Black Box Testing Bottom Up Testing Boundary Testing Bug Boundary Value Analysis Branch Testing Breadth Testing CAST Capture/Replay Tool CMM Cause Effect Graph Code Complete Code Coverage Code Inspection Code Walkthrough Coding Compatibility Testing


Component Component Testing Concurrency Testing Conformance Testing Context Driven Testing Conversion Testing Cyclomatic Complexity Data Dictionary Data Flow Diagram Data Driven Testing Debugging Defect Dependency Testing Depth Testing Dynamic Testing Emulator Endurance Testing End-to-End testing Equivalence Class Equivalence Partitioning Exhaustive Testing Functional Decomposition Functional Specification Functional Testing Glass Box Testing Gorilla Testing Gray Box Testing High Order Tests Independent Test Group (ITG) Inspection Integration Testing Installation Testing Load Testing Localization Testing Loop Testing Metric Monkey Testing Negative Testing N+1 Testing Path Testing Performance Testing Positive Testing Quality Assurance Quality Audit Quality Circle Quality Control Quality Management Quality Policy Quality System Race Condition Ramp Testing Recovery Testing Regression Testing


Release Candidate Sanity Testing Scalability Testing Security Testing Smoke Testing Soak Testing Software Requirements Specification Software Testing Static Analysis Static Analyzer Static Testing Storage Testing Stress Testing Structural Testing System Testing Testability Testing Test Automation Test Bed Test Case Test Driven Development Test Driver Test Environment Test First Design Test Harness Test Plan Test Procedure Test Script Test Specification Test Suite Test Tools Thread Testing Top Down Testing Total Quality Management Traceability Matrix Usability Testing Use Case User Acceptance Testing Unit Testing Validation Verification Volume Testing Walkthrough White Box Testing Workflow Testing


10 MANDATORY TESTING TYPES AND THEIR PURPOSE, WITH METHODOLOGY

Functionality Testing

Compatibility

Website Testing

Performance Testing

Automated Testing

Regression Testing

11. TESTING THE MULTI-PLATFORM SYSTEM

12 WINDOWS COMPLIANCE TESTING

For Each Application

For Each Window in the Application

Text Boxes

Option (Radio Buttons)

Check Boxes

Command Buttons

Drop Down List Boxes

Combo Boxes

List Boxes

SCREEN VALIDATION CHECKLIST

AESTHETIC CONDITIONS

VALIDATION CONDITIONS

NAVIGATION CONDITIONS

USABILITY CONDITIONS

DATA INTEGRITY CONDITIONS

MODES (EDITABLE READ-ONLY) CONDITIONS

GENERAL CONDITIONS

SPECIFIC FIELD TESTS

Date Field Checks

Numeric Fields

Alpha Field Checks

VALIDATION TESTING - STANDARD ACTIONS

13 CLIENT/SERVER PERFORMANCE TESTING PROCESS

Performance Testing Objectives

Pre-Requisites for Performance Testing

Quantitative, Relevant, Measurable, Realistic, Achievable Requirements

Stable System

Realistic Test Environment

Controlled Test Environment

Performance Testing Toolkit

Performance Requirements

Response Time Requirements

Load Profiles

Database Volumes


Process Specification

Preparation

Execution

Analysis

Tuning

Incremental Test Development

Interim tests can provide useful results

Test Execution

Results Analysis and Reporting

The Risk-Based Testing Process

14. WEB TESTING

User Interface

Instructions

Site map or navigational bar

Content

Colors/backgrounds

Images

Tables

Wrap-around

FUNCTIONALITY

INTERFACE TESTING

TEST CYCLES ARE FREQUENT WITH WEB SITES AND WEB APPLICATIONS

WEBSITE COOKIE TESTING

Stateless, Stateful Systems

The Stateless HTTP

To State or Not To State on the Web

Maintaining State with Cookies

Per-Session Cookies and Cookie Expiration

Cookie Detective Work

Cookie Analysis

COOKIE TESTING

Disabling Cookies

Selectively Rejecting Cookies

Corrupting Cookies

Cookie Encryption


15. NETWORK TESTING

Mission Critical Systems for Initial Testing

Network Security Testing

Operational Security Testing

Vulnerability

SECURITY TESTING AND THE SYSTEM DEVELOPMENT LIFE CYCLE

SYSTEM DEVELOPMENT LIFE CYCLE

Initiation

Development and Acquisition

Implementation and Installation

Operational and Maintenance

Disposal

Implementation Stage

Operational Stage

Documenting Security Testing Results

SECURITY TESTING TECHNIQUES

Roles and Responsibilities for Testing

General Information Security Principles

Summary Comparisons of Network testing Techniques

16 AUTOMATED TESTING

What is "Automated Testing"?

Who Should Automate Tests?

Choosing What To Automate

Cost-Effective Automated Testing

The Record/Playback Myth

Software Test Automation and the Product Life Cycle

The Product Life Cycle

Design Phase

Code Complete Phase

Automation Checklist

Alpha Phase

Beta Phase

Zero Defect Build Phase

Green Master

Understanding the PLC will help you select your automation tools

AUTOMATED TESTING OF GRAPHICAL USER INTERFACES


Capture-and-Replay Tools

Capture mode

Programming

Checkpoints

Replay mode

Testing of the GUI

Traps when Accessing GUI-Objects

Traps when Testing Functionality

Traps of Style Guide Testing

Making GUI Test-Automation Work

GUI Test Specification

GUI Test Implementation

Building a Test Case Library

Measurements of Expenditures

TEST AUTOMATION SUCCESS AND FAILURE

What is Failure?

REDUCED TESTING TIME

CONSISTENT TEST PROCEDURES

REDUCED QA COSTS

IMPROVED TESTING PRODUCTIVITY

IMPROVED PRODUCT QUALITY

What is Success?

Automated Testing Success Factors

Goals

Readiness to Automate

Role of the Automated Testing Team

Automated Test System Architecture

Method of Script Creation

Method of Verification

Automation Programming Practices

TEST AUTOMATION MYTHS AND FACTS

Myths & Facts

Find more bugs

Eliminate or reduce manual testers


MAINTAINABILITY OF AUTOMATED TEST SUITES

Problem

Strategies for Success Reset management expectations about the timing of benefits from automation

Recognize that test automation development is software development.

Use a data-driven architecture.

Use a framework-based architecture.

Recognize staffing realities.

Consider using other types of automation.

STEP APPROACH TO TEST TOOL EVALUATION

Compatibility Issues

Budget Constraints

Business Requirements

FEATURES THAT ARE IMPORTANT IN ANY GOOD TOOL

Scripting language

UI element identifiers

Reusable libraries

Outside libraries

Abstract layers

Distributed tests

File I/O

Database testing

Error handling

The debugger

Source control

Command line script execution

The user community

17 RATIONAL ROBOT [AT]

What is Rational Robot?

Creating a script

Name(s) of Test Scripts

Recording

Verification Points

Types of Verification Points

Creating a wait state

Working with the Data in Data Grids

Some Useful SQA Basic Scripting Commands

Using Wildcards in Window Captions

Editing, Compiling, and Debugging Scripts

Compiling a script

Locating Compilation Errors


Debugging GUI Scripts

Setting and Clearing Breakpoints

Examining Variable Values

Library Source Files

18 RATIONAL TEST MANAGER [AT]

What is Rational Test Manager? What is a Virtual Tester?

Planning Tests

Test Inputs

Test Plans, Test Folders and Test Cases

Building a Test Hierarchy in Test Manager

The Test Plan

The Test Case Folder

Test Cases

Iterations and Test Assets

Associating Test Assets

Designing Tests

Implementing Tests

Creating Suites

Opening Suites

Executing Tests

Running Automated Test Scripts

The progress bar and default views

Evaluating Tests

Submitting Defects

Reporting Results

19 RATIONAL ADMINISTRATOR [AT]

Managing the rational repositories with the Administrator

Use the Administrator to

20 RATIONAL LOG VIEW [AT]

21 RATIONAL SITE CHECK [AT]

Visualize the structure of Web site Identify and analyze Web pages with active content Filter information Examine and edit the source code Update and repair files Perform comprehensive testing of secure Web site


22 WINRUNNER [AT]

WinRunner Testing Modes

Context Sensitive

Analog

The WinRunner Testing Process

Create Tests

Create GUI Map

Debug Tests

Run Tests

Report Defects

View Results

WinRunner Q & A

23 LOAD RUNNER [AT]

Client/Server Load Testing

Manual Testing Limitations

The LoadRunner Solution

Using LoadRunner

LoadRunner Vuser Technology

LoadRunner Vuser Types

GUI Vusers

Database Vusers

RTE Vusers

Working with LoadRunner Planning the Test

Creating the Vuser scripts

Creating the Scenario

Running the Scenario

Analyzing Test Results

24 TEST DIRECTOR [AT]

ABOUT TESTDIRECTOR

HOW TESTDIRECTOR WORKS

Requirements Management

Planning Tests

Scheduling and Running Tests

Defect Management

Graphs and Reports

FEATURES AND BENEFITS

Supports the Entire Testing Process

Provides Anytime, Anywhere Access to Testing Assets


Provides Traceability Throughout the Testing Process

Integrates with Third-Party Applications

Manages Manual and Automated Tests

Accelerates Testing Cycles

Facilitates Consistent and Repetitive Testing Process

Provides Analysis and Decision Support Tools

25 QUALITY ASSURANCE

DIFFERENCE BETWEEN QUALITY ASSURANCE AND QUALITY CONTROL

ABOUT 9001: 9004

ISO 9000 BASIC

THE NEED FOR A QUALITY SYSTEMS STANDARD

INTRODUCING THE CAPABILITY MATURITY MODEL

SOURCES OF THE CMM

STRUCTURE OF THE CMM

DEFINITION OF THE CMM MATURITY LEVELS

SEI – CMM BASIC

The Process Maturity Framework

The Five Levels of Software Process Maturity

Operational Definition of the Capability Maturity Model

Using the CMM

Future Directions of the CMM

26 TEMPLATES

TEST SCENARIO

TEST PLAN

Overall Test Plan

Unit Test Plan

System Test Plan

Integration Test Plan

TEST CASE

Unit Test Case

System Test Case

Integration Test Case

TEST SUMMARY

DEFECT REPORT

TRACEABILITY METRICS

CONCLUSION

REFERENCE



This document is a condensed written summary, or abstract, of SQA and software testing topics.

These notes are suitable for self-study, so anyone may pass them on, verbatim, to anyone else.



1 SOFTWARE DEVELOPMENT LIFE CYCLE:

A phased engineering approach to software development:
• Requirements Analysis & Specification
• Design – Architecture, System or High Level, Detailed or Low Level
• Coding / Construction
• Testing
• Maintenance

2 TESTING:

Testing is a process of exercising or evaluating a system or component by manual or automated means to verify that it satisfies specified requirements.

Testing is a process of executing a program with the intent of finding errors

Establishing confidence that a program does what it is supposed to do

Detecting specification errors and deviations from the specifications

Any activity aimed at evaluating an attribute or capability of a program or system


3 TESTING LIFE CYCLE:

An effective testing life cycle must be imposed over the development life cycle before testing improvements can be made.

The goal is to make the testing flow naturally from the development work.

The testing work should not disrupt or lengthen the development work.

Rough Picture of Testing Life Cycle:

1. Project Initiation – Develop a broad test strategy.

2. Requirements – Establish the testing requirements. Test and validate the requirements.

3. Design – Prepare a preliminary test plan. Test and validate the design.

4. Development – Complete the test plan. Integrate and test subsystems. Conduct the system test.

5. Implementation – Test changes and fixes. Evaluate testing effectiveness.


LIFE CYCLE MODELS:

• Waterfall Model
• Rapid Prototyping Cycle
• Phased Models
  - Incremental development
  - Evolutionary development
  - V-Model

WATERFALL MODEL:

Requirement → Design → Code → Test → Integrate → Maintain

The waterfall model describes a process of stepwise refinement. It is widely used in the defense and aerospace industries.

But software is different:
• There is no fabrication step; program code is just another design level. Hence there is no 'commit' step - software can always be changed.
• There is no body of experience for design analysis (yet).
• The waterfall model takes a static view of requirements and ignores changing needs; there is a lack of user involvement once the specification is written.
• It doesn't accommodate prototyping, reuse, etc.

PROTOTYPING LIFE CYCLE:

Requirements → Design Prototype → Build Prototype → Test Prototype → Document Requirements → Design → Code → Test → Integrate

Prototyping is used for:
- understanding the requirements for the user interface
- examining the feasibility of a proposed design approach
- exploring system performance issues

Problems:
1. Users treat the prototype as the solution.
2. A prototype is only a partial specification.


PHASED LIFE CYCLE MODELS:

Incremental Development:
Requirements → [Release 1] Design → Code → Test → Integrate → Q & M
             → [Release 2] Design → Code → Test → Integrate → Q & M
             → [Release 3] Design → Code → Test → Integrate → Q & M
             → [Release 4] Design → Code → Test → Integrate → Q & M
Each release adds more functionality.

Evolutionary Development:
[Version 1] Requirements → Design → Code → Test → Integrate → Q & M
[Version 2] Requirements → Design → Code → Test → Integrate → Q & M
[Version 3] Requirements → Design → Code → Test → Integrate → Q & M
Each version incorporates new requirements.

V-Model:
Each analysis and design activity on the left of the V is paired with a test and integration activity on the right:
- System Requirement ↔ System Integration
- Software Requirement ↔ Acceptance Test
- Preliminary Design ↔ Software Integration
- Detailed Design ↔ Component Test
- Code and Debug ↔ Unit Test


PARALLEL TESTING:

Test preparation runs in parallel with development: during the requirements phase the system test plan and system test cases are prepared; during the design phase the integration test plan and integration test cases are prepared; during coding the unit test plan and unit test cases are prepared.

END PHASE TESTING:

In end phase testing, the unit, integration and system test plans and test cases are prepared only after the requirements, design and construction (coding) phases are complete.


WHAT DOES TESTING PROVE?

Testing proves that the product is defective:
• Errors can be introduced at any stage of software development.
• Though many verification activities are performed in the earlier SDLC phases, no technique is perfect; these techniques are all static verification methods because no executable code exists yet.
• Testing does not ensure that the product is defect-free.

VERIFICATION:

• Ensuring that the output of a phase meets the requirements / goals set for the phase
• Are you building the product "RIGHT"?
• "HOW"?

VALIDATION:

• Ensuring that each phase is effective in achieving its goals - that the product actually meets user needs
• Are you building the "RIGHT" product?
• "WHAT"?


4 DEFECTS:

A defect is a variance from the desired product attribute.

• Two kinds of defects:

(a) Defect from specifications - the product built varies from the product specified.

(b) Defect in capturing user requirements - the variance is something the user wanted that is not in the built product, but was also not specified for the product.

• Defect categories:

» Wrong - incorrect implementation
» Missing - a user requirement is not built into the product
» Extra - an unwanted requirement is built into the product

A failure is a defect that causes an error in the operation of the program or that adversely affects the end-user / customer.

5 SOFTWARE RELIABILITY:

Software reliability is the probability that the software will work without failure for a specified period of time in a specified environment.

Failure: A failure means the program, in its functioning, has not met user requirements in some way. A failure can be as severe as a program crash requiring interruption of processing and a program reload; on the other hand, a very small variation in operation may also be considered a failure.

Fault / Defect / Bug: A fault (bug) is the defect in the program which, when executed under particular conditions, causes a failure.

Faults can be broadly classified into two kinds. Semantic faults are defects arising from programmer error in communicating the meaning of what is to be done. Syntactic faults result from errors in following the rules of the language in which the program is written; these are generally discovered by the compiler or assembler.

Software testing is the process of executing a program with the intention of finding errors or bugs.

Debugging: Errors (or bugs) which are found must be removed; the activity of diagnosing the precise nature of a known error and then correcting it is known as debugging. Testing and debugging are often used interchangeably, though they are actually two distinct activities; but since both are carried out as the final step in software development, they jointly comprise the testing stage.
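As a worked illustration of the reliability definition above (not part of the original text), reliability is often modelled with an exponential failure law, where λ is an assumed constant failure rate:

    R(t) = e^{-λt},    MTTF = 1/λ

For example, under an assumed failure rate of λ = 0.01 failures per hour, the probability of running 10 hours without failure would be R(10) = e^{-0.1} ≈ 0.905, and the mean time to failure would be 100 hours. The numbers here are purely illustrative.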


BUGS SEVERITY

1. Show stopper
2. Critical
3. High
4. Medium
5. Low

1. Show Stopper: A bug or problem which prevents a release.
2. Critical: A bug so critical that it needs to be fixed before further testing can proceed.
3. High: High-severity bugs found in system testing must be fixed before sign-off; this is one of the exit criteria.
4. Medium and Low: Medium- and low-severity bugs can remain outstanding, because they do not damage the functionality of the system.

BUG IMPACTS

Low impact: Minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way.

Medium impact: A problem that (a) affects a more isolated piece of functionality, (b) occurs only at certain boundary conditions, (c) has a workaround (where "don't do that" might be an acceptable answer to the user), (d) occurs at only one or two customers, or (e) is very intermittent.

High impact: Used only for serious problems, affecting many sites, with no workaround. Frequent or reproducible crashes/core dumps/GPFs fall in this category, as does major functionality not working.

Urgent impact: Reserved for only the most catastrophic of problems: data corruption, complete inability to use the product at almost any site, etc. For released products, an urgent bug implies that shipping of the product should stop immediately until the problem is resolved.

BUG LIFE CYCLE

Bug life cycles are similar to software development life cycles. At any time during the software development life cycle, errors can be made during the gathering of requirements, requirements analysis, functional design, internal design, documentation planning, document preparation, coding, unit testing, test planning, integration, testing, maintenance, updates, re-testing and phase-out. A bug's life cycle begins when a programmer, software developer, or architect makes a mistake and creates an unintentional software defect, i.e. a bug, and ends when the bug is fixed and no longer exists.

What should be done after a bug is found? When a bug is found, it needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check that the fixes didn't create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.
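The determinations described above are typically captured as structured fields in the tracking tool. Below is a minimal, hypothetical sketch (not from the original text) of such a defect record, using the fields discussed in this chapter; the field names and types are assumptions made purely for illustration.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class DefectRecord:
        # Illustrative fields only; real tracking tools define their own schemas.
        bug_id: int
        summary: str
        severity: str                       # e.g. "Critical", "Major", "Minor"
        priority: str                       # e.g. "P1" .. "P5"
        status: str = "NEW"                 # NEW, ASSIGNED, RESOLVED, VERIFIED, CLOSED, REOPENED
        resolution: Optional[str] = None    # FIXED, INVALID, WONTFIX, DUPLICATE, ...
        assigned_to: Optional[str] = None
        platform: str = "All"
        steps_to_reproduce: list = field(default_factory=list)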


STATUS

The status field indicates the general health of a bug. Only certain status transitions are allowed.

UNCONFIRMED
This bug has recently been added to the database. Nobody has validated that this bug is true. Users who have the "can confirm" permission set may confirm this bug, changing its state to NEW. Or, it may be directly resolved and marked RESOLVED.

1 - NEW
This bug has recently been added to the assignee's list of bugs and must be processed. Bugs in this state may be accepted and become ASSIGNED, passed on to someone else and remain NEW, or resolved and marked RESOLVED.

2 - ASSIGNED
This bug is not yet resolved, but is assigned to the proper person. From here bugs can be given to another person and become NEW, or resolved and become RESOLVED.

3 - REOPENED
This bug was once resolved, but the resolution was deemed incorrect. For example, a WORKSFORME bug is REOPENED when more information shows up and the bug is now reproducible. From here bugs are either marked ASSIGNED or RESOLVED.

All bugs which are in one of these "open" states have no resolution yet; their resolution field is set to blank. All other bugs will be marked with one of the resolutions listed in the next section.

4 - RESOLVED
A resolution has been taken, and it is awaiting verification by QA. From here bugs are either re-opened and become REOPENED, are marked VERIFIED, or are closed for good and marked CLOSED.

5 - VERIFIED
QA has looked at the bug and the resolution and agrees that the appropriate resolution has been taken. Bugs remain in this state until the product they were reported against actually ships, at which point they become CLOSED.

6 - CLOSED
The bug is considered dead; the resolution is correct. Any zombie bugs who choose to walk the earth again must do so by becoming REOPENED.
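The allowed transitions described above form a small state machine. The sketch below (Python, not from the original text) simply encodes those transitions so that a tracking script could validate a proposed status change; it is an illustration only.

    # Allowed bug-status transitions, as described in the STATUS section above.
    ALLOWED_TRANSITIONS = {
        "UNCONFIRMED": {"NEW", "RESOLVED"},
        "NEW": {"ASSIGNED", "NEW", "RESOLVED"},
        "ASSIGNED": {"NEW", "RESOLVED"},
        "REOPENED": {"ASSIGNED", "RESOLVED"},
        "RESOLVED": {"REOPENED", "VERIFIED", "CLOSED"},
        "VERIFIED": {"CLOSED"},
        "CLOSED": {"REOPENED"},
    }

    def can_transition(current: str, proposed: str) -> bool:
        """Return True if the proposed status change is allowed."""
        return proposed in ALLOWED_TRANSITIONS.get(current, set())

    assert can_transition("RESOLVED", "VERIFIED")
    assert not can_transition("VERIFIED", "ASSIGNED")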


RESOLUTION

The resolution field indicates what happened to this bug.

No resolution yet. All bugs which are in one of the "open" states have the resolution set to blank. All other bugs will be marked with one of the following resolutions.

1 - FIXED
A fix for this bug is checked into the tree and tested.

2 - INVALID
The problem described is not a bug.

3 - WONTFIX
The problem described is a bug which will never be fixed.

4 - LATER
The problem described is a bug which will not be fixed in this version of the product.

5 - REMIND
The problem described is a bug which will probably not be fixed in this version of the product, but might still be.

6 - DUPLICATE
The problem is a duplicate of an existing bug. Marking a bug duplicate requires the bug number of the duplicating bug and will at least put that bug number in the description field.

7 - WORKSFORME
All attempts at reproducing this bug were futile, and reading the code produces no clues as to why this behavior would occur. If more information appears later, please re-assign the bug; for now, file it.

SEVERITY

This field describes the impact of a bug.

Blocker - Blocks development and/or testing work
Critical - Crashes, loss of data, severe memory leak
Major - Major loss of function
Minor - Minor loss of function, or other problem where an easy workaround is present
Trivial - Cosmetic problem, like misspelled words or misaligned text
Enhancement - Request for enhancement

PRIORITY

This field describes the importance and order in which a bug should be fixed. It is used by the programmers/engineers to prioritize their work. The available priorities range from P1 (most important) to P5 (least important).


PLATFORM

This is the hardware platform against which the bug was reported. Legal platforms include: All (happens on all platforms; cross-platform bug), Macintosh, PC, Sun, HP.

Note: Selecting the option "All" does not select bugs assigned against all platforms. It merely selects bugs that occur on all platforms.

OPERATING SYSTEM

This is the operating system against which the bug was reported. Legal operating systems include: All (happens on all operating systems; cross-platform bug), Windows 95, Mac System 8.0, Linux. Note that the operating system implies the platform, but not always; for example, Linux can run on PC, Macintosh and others.

ASSIGNED TO

This is the person in charge of resolving the bug. Every time this field changes, the status changes to NEW to make it easy to see which new bugs have appeared on a person's list. The default status for queries is set to NEW, ASSIGNED and REOPENED. When searching for bugs that have been resolved or verified, remember to set the status field appropriately.


6 TESTING CONCEPTS:

TYPES OF TESTING:

Black Box Testing White Box Testing Grey Box Testing

Black Box Testing: [Functionality Testing or Data-driven Testing]:

A system or component whose inputs, outputs and general functions are known, but whose contents or implementations are unknown or irrelevant.

Black box testing involves testing a system as a whole, not as a system of connected parts. Realistically, however, black box testing still provides some differentiation between clients, the server, and the network connecting them. This means that when performing black box testing, it is possible to determine whether time is being spent in the client, the network, or the server, and hence determine where bottlenecks lie. The limitation of this approach is that knowing, for example, that the server is the bottleneck of a system is not necessarily sufficient. Is the bottleneck caused by a lack of memory, a lack of CPU performance, poor algorithms, or one of many other causes? To answer these questions, diagnostics need to be performed on the server while generating a load.

White Box Testing: [Structural Testing or Logic-driven Testing]: Source code is available for testing. White box testing treats the system as a collection of many parts. During white box testing, many diagnostics will be run on the server(s), the network, and even on clients. This allows the causes of bottlenecks to be much more easily identified, and therefore addressed. White box testing requires much greater technical knowledge of the system than does black box testing. To perform white box testing, knowledge is needed as to the components that make up the system, how they interact, the algorithms involved with the components, and the configuration of the components. It must also be known how to perform diagnostics against these components, and even what diagnostics are needed and appropriate. This knowledge can generally only be provided by someone intimate with the system. A problem with white box testing is the effect that running diagnostics has on system performance. Ideally, running diagnostics should have no impact on performance. However, with the exception of the network, this is rarely possible; running diagnostics consumes some system resources. When considering the appropriateness of various diagnostics, one must consider the benefit vs. the impact on performance.


Gray Box Testing: Some identified complex modules are white-box tested, while the complete application is black-box tested.

Why do white box testing when black box testing is used to test conformance to requirements?

• Logic errors and incorrect assumptions are most likely to be made when coding for "special cases". We need to ensure these execution paths are tested.

• Assumptions about execution paths may be incorrect, leading to design errors. White box testing can find these errors.

• Typographical errors are random, and are just as likely to be on an obscure logical path as on a mainstream path.

Advantages and Disadvantages of Black Box Testing

Advantages of black box testing:
• Black box tests are reproducible.
• The environment in which the program runs is also tested.
• The invested effort can be used multiple times.

Disadvantages of black box testing:
• The results are often overestimated.
• Not all properties of a software product can be tested.
• The reason for a failure is not found.

LEVELS OF TESTING:

Unit Testing Integration Testing System Testing Acceptance Testing Regression Testing

1 - Unit Testing:

The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and tested) satisfies its functional specification, or whether its implemented structure matches the intended design structure.
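As a minimal illustration (not from the original text), the sketch below unit-tests a small, hypothetical add() function with Python's standard unittest module; the function and its expected behaviour are assumptions made purely for the example.

    import unittest

    def add(a, b):
        """Hypothetical unit under test."""
        return a + b

    class TestAdd(unittest.TestCase):
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()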

2 - Integration Testing:

Integration Testing refers to the testing in which software units of an application are combined and tested for evaluating the interaction between them.


2.1 - Top Down Integration.

Modules integrated by moving down the program design hierarchy. Can use depth first or breadth first top down integration. Steps:

1. Main control module used as the test driver, with stubs for all subordinate modules.
2. Replace stubs either depth first or breadth first.
3. Replace stubs one at a time.
4. Test after each module is integrated.
5. Use regression testing (conducting all or some of the previous tests) to ensure new errors are not introduced.

Verifies major control and decision points early in design process. Top level structure tested the most. Depth first implementation allows a complete function to be implemented, tested and demonstrated. Can do depth first implementation of critical functions early. Top down integration forced (to some extent) by some development tools in programs with graphical user interfaces.

2.2 - Bottom Up Integration.
Start testing with the bottom of the program. With the bottom-up strategy, the program as a whole does not exist until the last module is added.

Begin construction and testing with atomic modules (lowest level modules). Use driver programs to test. Steps:

1. Low level modules are combined in clusters (builds) that perform specific software sub-functions.
2. A driver program is developed to test each cluster.
3. The cluster is tested.
4. Driver programs are removed and clusters combined, moving upwards in the program structure.
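To make the stub and driver roles concrete, here is a small, hypothetical Python sketch (not from the original text): in top-down integration a stub stands in for an unfinished subordinate module, while in bottom-up integration a driver exercises a low-level module before its callers exist. All names are invented for illustration.

    # --- Top-down: the real report formatter is not ready, so a stub replaces it ---
    def format_report_stub(data):
        """Stub for a subordinate module: returns a canned, predictable result."""
        return "REPORT(STUB)"

    def generate_summary(data, format_report=format_report_stub):
        """Higher-level module under test; the stub is injected in place of the real formatter."""
        return f"Summary of {len(data)} records: {format_report(data)}"

    # --- Bottom-up: a throwaway driver exercises a low-level module directly ---
    def parse_record(line):
        """Low-level module under test."""
        name, value = line.split(",")
        return {"name": name.strip(), "value": int(value)}

    def driver():
        """Driver program: feeds test inputs to the low-level module and checks results."""
        assert parse_record("temp, 42") == {"name": "temp", "value": 42}
        print(generate_summary(["a", "b"]))  # exercises the higher level with the stub

    if __name__ == "__main__":
        driver()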

2.3 - Comments on Integration Testing
In general, a combination of top-down and bottom-up testing tends to be used. Critical modules should be tested and integrated early.

3 - System Testing:

Testing conducted on a complete, integrated system to evaluate the system’s compliance with its specified requirements. Software once validated for meeting functional requirements must be verified for proper interface with other system elements like hardware, databases and people. System testing verifies that all these system elements mesh properly and the software achieves overall function / performance.

3.1 - Recovery Testing
Many systems need to be fault tolerant - processing faults must not cause overall system failure. Other systems require recovery after a failure within a specified time. Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed.


3.2 - Security Testing
Systems with sensitive information, or which have the potential to harm individuals, can be a target for improper or illegal use. This can include:

• attempted penetration of the system by 'outside' individuals for fun or personal gain. • disgruntled or dishonest employees

During security testing the tester plays the role of the individual trying to penetrate the system. Large range of methods:

• attempt to acquire passwords through external clerical means • use custom software to attack the system • overwhelm the system with requests • cause system errors and attempt to penetrate the system during recovery • browse through insecure data.

Given time and resources, the security of most (all?) systems can be breached.

3.3 - Stress Testing

Stress testing is designed to test the software with abnormal situations. Stress testing attempts to find the limits at which the system will fail through abnormal quantity or frequency of inputs. For example:

• Higher rates of interrupts • Data rates an order of magnitude above 'normal' • Test cases that require maximum memory or other resources. • Test cases that cause 'thrashing' in a virtual operating system. • Test cases that cause excessive 'hunting' for data on disk systems.
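As a toy illustration of driving a component with abnormal quantities of input (not from the original text), the sketch below hammers a hypothetical process_request() function from many threads and counts failures at each load level; the function, sizes and thresholds are invented purely for the example.

    from concurrent.futures import ThreadPoolExecutor

    def process_request(payload: str) -> str:
        """Hypothetical unit under stress; rejects abnormally large inputs."""
        if len(payload) > 10_000:
            raise ValueError("payload too large")
        return payload.upper()

    def stress(level: int) -> int:
        """Fire `level` concurrent requests of increasing size; return the number of failures."""
        failures = 0
        with ThreadPoolExecutor(max_workers=16) as pool:
            futures = [pool.submit(process_request, "x" * (level * 100)) for _ in range(level)]
            for f in futures:
                try:
                    f.result()
                except ValueError:
                    failures += 1
        return failures

    for level in (10, 50, 100, 200):
        print(level, "concurrent requests ->", stress(level), "failures")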

Can also attempt sensitivity testing to determine if particular combinations of otherwise normal inputs can cause improper processing.

3.4 - Performance Testing

For real-time and embedded systems, functional requirements may be satisfied but performance problems make the system unacceptable. Performance testing checks the run-time performance in the context of the integrated system.

Can be coupled with stress testing. May require special software instrumentation.

4 - Regression Testing:

Selective retesting to detect faults introduced during modification of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets its specified requirements

5 - Acceptance Testing:

It is the process of comparing a program to its requirements. Client-side testing is called acceptance testing.

5.1 - Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

5.2 - Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.


TEST STRATEGY:

A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be applied and the methods, techniques and tools to be used.

A test strategy drives:
1. Test Plans
2. Test Cases

• Should be developed for each project
• Defines the scope and general direction for testing in the project
• Must answer the following:
  - When will testing occur?
  - What kind of testing will occur?
  - What are the risks?
  - What are the critical success factors?
  - What are the testing objectives?
  - What are the trade-offs?
  - Who will conduct the testing?
  - What tools will be used?
  - How much testing will be done?
• Exit criteria: Review and approval of the test strategy document
• Inputs: Project proposal / User requirements / Project plan
• Outputs: Test strategy document
• Procedure:
  - Identify the team to prepare the test strategy document. The team should comprise the project members or members of the independent testing team.
  - Identify the team to review the test strategy document. The recommended review team is the PM, a QA representative and a user representative.
  - Perform risk assessment.
  - Identify critical success factors.

TEST STRATEGY FORMAT

• Name of the project
• Brief description
• Type of project
• Type of software
• Critical success factors
• Risk factors
• Test objectives
• Trade-offs


TEST PLAN

A test plan states what the items to be tested are, at what level they will be tested, what sequence they are to be tested in, how the test strategy will be applied to the testing of each item, and describes the test environment.

System Test Plan

Describing the plan for system testing. This would be prepared using the Software Requirements Specification document.

Integration Test Plan

Describing the plan for integration of tested software components.

Unit Test Plan

Describing the plan for testing of individual units of software.

TEST CASE

A document that specifies the test inputs, execution conditions, and predicted results for an item to be tested.

System Test Cases

Specifying the test cases for system testing.

Integration Test Cases

Specifying the test cases for each stage of integration of tested software components.

Unit Test Cases

Specifying the test cases for testing of individual units of software.

• Objective: To prepare for subsequent planning and execution by first setting out the goals with respect to the testing effort

• Responsibility: Project manager

• Entry criteria: As soon as the project commences; should be done along with the project planning.

Types of Test Case:

Functional - derived from specifications
Structural - derived from code structure
Data - derived from data structure
Randomized - derived from a random generator
Extracted - derived from existing files or test cases
Extreme - derived from limits and boundary conditions


TEST METRICS:

Metrics are defined as “standards of measurement” and have long been used in the IT industry to indicate a method of gauging the effectiveness and efficiency of a particular activity within a project. Test metrics exist in a variety of forms. The question is not whether we should use them, but rather, which ones should we use. Simpler is almost always better. Although it may be interesting to derive the Binomial Probability Mass Function for a particular project, is it practical in terms of the resources used and the time it will take to capture the necessary data? Will the resulting value be meaningful to the effort of process improvement? Most often, the answer is no.

Although Test Metrics are gathered during the test effort, they can provide measurements of many different activities performed throughout a project. In conjunction with Root Cause Analysis, Test Metrics can be used to quantitatively track issues from points of occurrence throughout the development process. Finally, when Test Metrics information is accumulated, updated and reported on a consistent and regular basis it ensures that trends can be promptly captured and evaluated.

Types of Metrics

Base Metrics

Base metrics constitute the raw data gathered by a Test Analyst throughout the testing effort. These metrics are used to provide project status reports to the Test Lead and Project Manager, and also feed into the formulas used to derive Calculated Metrics. Every project should track the following test metrics:

• # of Test Cases
• # of Test Cases Executed / Unexecuted
• # of Test Cases Passed
• # of Test Cases Failed
• # of Test Cases Under Investigation
• # of Test Cases Blocked
• # of Test Cases Re-executed
• # of First Run Failures
• Total Executions
• Total Passes
• Total Failures
• Test Case Execution Time
• Test Execution Time

There are other Base Metrics that can and/or should be tracked. This list is sufficient for most Test Teams that are starting a metrics program.

Calculated Metrics

Calculated Metrics convert the Base Metrics data into more useful information. These types of metrics are generally the responsibility of the Test Lead and can be tracked at many different levels (by module, tester, or project). The following Calculated Metrics are recommended for implementation in all test efforts:

• % Complete
• % Test Coverage
• % Test Cases Passed
• % Test Cases Blocked
• % 1st Run Failures
• % Failures
• % Defects Corrected
• % Rework
• % Test Effectiveness
• % Test Efficiency
• Defect Discovery Rate
• Defect Removal Cost
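A small sketch (Python, not from the original text) of how a few of these calculated metrics might be derived from the base metrics; the exact formulas vary between organizations, so the ones below are assumptions made for illustration only.

    # Base metrics gathered during the test effort (illustrative values).
    base = {
        "total_test_cases": 200,
        "executed": 150,
        "passed": 120,
        "blocked": 5,
        "first_run_failures": 18,
    }

    # Assumed formulas -- real definitions differ from team to team.
    pct_complete = 100 * base["executed"] / base["total_test_cases"]
    pct_passed = 100 * base["passed"] / base["executed"]
    pct_blocked = 100 * base["blocked"] / base["total_test_cases"]
    pct_first_run_failures = 100 * base["first_run_failures"] / base["executed"]

    print(f"% Complete:           {pct_complete:.1f}")
    print(f"% Test Cases Passed:  {pct_passed:.1f}")
    print(f"% Test Cases Blocked: {pct_blocked:.1f}")
    print(f"% 1st Run Failures:   {pct_first_run_failures:.1f}")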


Testing Process

Testing is a process of evaluating a project / product by manual or automated means to verify that it meets the specified requirements. This process is used to test the project / product using the Test Plan and Test Cases, and will find the defects that do not conform to the requirements. This process helps the testing team decide how to go about testing.

Input:

System Testing
- Preparation of Test Plan: Product scope document and Requirement specifications document
- Preparation of Test Cases: System Test Plan
- Testing: System Test Plan, test cases and integration-tested product

Integration Testing
- Preparation of Test Plan: Product Architecture document and Design document
- Preparation of Test Cases: Integration Test Plan
- Testing: Integration Test Plan, test cases and unit-tested product

Unit Testing
- Preparation of Test Plan: Detailed design
- Preparation of Test Cases: Unit Test Plan
- Testing: Unit Test Plan, test cases and code

Entry Criteria:

System Testing
- Preparation of Test Plan: Approved Product scope document and approved Requirement specifications document
- Preparation of Test Cases: Approved System Test Plan
- Testing: Approved System Test Plan, test cases and completion of integration testing

Integration Testing
- Preparation of Test Plan: Approved Product Architecture document and Design document
- Preparation of Test Cases: Approved Integration Test Plan
- Testing: Approved Integration Test Plan, test cases and completion of unit testing

Unit Testing
- Preparation of Test Plan: Approved detailed design
- Preparation of Test Cases: Approved Unit Test Plan
- Testing: Approved Unit Test Plan, test cases and reviewed code

Validation Criteria:

- Preparation of Test Plans: Technical review of test plans
- Preparation of Test Cases: Technical review of test cases
- Testing: Technical review of test results


Procedure 1:

• Preparation of Overall Test Plan
• Preparation of Unit Test Plan
• Preparation of Unit Test Cases
• Execution of Unit Test Cases
• Generation of Defect Report
• Preparation of Integration Test Plan

Procedure 2:

• Preparation of Integration Test Cases
• Execution of Integration Test Cases
• Generation of Defect Report
• Preparation of System Test Plan
• Preparation of System Test Cases
• Execution of System Test Cases
• Generation of Defect Report

Exit Criteria:

• Approved Test Plans
• Approved Test Cases
• Approved Defect Report
• QC Certification

Quality Records:

• Technical review report of Test Plans
• Technical review report of Test Cases
• Technical review report of Defect Report

Deliverables:

• UTP, UTC
• ITP, ITC
• STP, STC
• Defect Report


Inputs and Outputs for the Test Process

Requirements analysis
- Inputs: Requirements definition, requirements specification
- Outputs: Requirements traceability matrix

Test planning
- Inputs: Requirements specification, requirements trace matrix
- Outputs: Test plan (test strategy, test system, effort estimate and schedule)

Test design
- Inputs: Requirements specification, requirements trace matrix, test plan
- Outputs: Test designs (test objectives, test input specification, test configurations)

Test implementation
- Inputs: Software functional specification, requirements trace matrix, test plan, test designs
- Outputs: Test cases (test procedures and automated tests)

Test debugging
- Inputs: "Early look" build of code, test cases, working test system
- Outputs: Final test cases

System testing
- Inputs: System test plan, requirements trace matrix, "test-ready" code build, final test cases, working test system
- Outputs: Test results (bug reports, test status reports, test results summary report)

Acceptance testing
- Inputs: Acceptance test plan, requirements trace matrix, beta code build, acceptance test cases, working test system
- Outputs: Test results

Operations and maintenance
- Inputs: Repaired code, test cases to verify bugs, regression test cases, working test system
- Outputs: Verified bug fixes

Test Coverage

Test coverage means "what is tested." The following test coverage is required under this procedure:

- Test all the primary functions that can reasonably be tested in the time available. Make sure the Test Manager is aware of any primary functions that you don't have the time or the ability to test.
- Test a sample of interesting contributing functions. You'll probably touch many contributing functions while exploring and testing primary functions.
- Test selected areas of potential instability. As a general rule, choose five to ten areas of the product (an area could be a function or a set of functions) and test with data that seems likely to cause each area to become unstable.

The Test Manager will decide how much time is available for the General Functionality and Stability Test. You have to fit all of your test coverage and reporting into that time slot. As a general rule, you should spend 80% of your time focusing on primary functions, 10% on contributing functions, and 10% on areas of instability.

Products that interact extensively with the operating system will be tested more intensively than other products. More time will be made available for testing in these cases.


7 MANUAL TESTING

GUI - SQA team members, upon receipt of the Development builds, walk through the GUI and either update the existing hard copy of the product Roadmaps, or create new hard copy. This is then passed on to the Tools engineer to automate for new builds and regression testing. Defects are entered into the bug tracking database for investigation and resolution. Questions about GUI content are communicated to the Development team for clarification and resolution. The team works to arrive at a GUI appearance and function which is "customer oriented" and appropriate for the platform (Web, UNIX, Windows, Macintosh). Automated GUI regression tests are run against the product at Alpha and Beta "Hand off to QA" (HQA) to validate that the GUI remains consistent throughout the development process. During the Alpha and Beta periods, selected customers validate the customer orientation of the GUI.

Features & Functions - SQA test engineers, relying on the team definition, exercise the product features and functions accordingly. Defects in feature/function capability are entered into the defect tracking system and are communicated to the team. Features are expected to perform as specified, and their functionality should be oriented toward ease of use and clarity of objective. Tests are planned around new features, and regression tests are exercised to validate that existing features and functions are enabled and performing in a manner consistent with prior releases. SQA, using the exploratory testing method, manually tests and then plans more exhaustive testing and automation. Regression tests are exercised which consist of using developed test cases against the product to validate field input, boundary conditions and so on. Automated tests developed for prior releases are also used for regression testing.

Installation - The product is installed on each of the supported operating systems, in either the default flat-file configuration or with one of the supported databases. Every operating system and database supported by the product is tested, though not in all possible combinations. SQA is committed to executing, during the development life cycle, the combinations most frequently used by the customers. Clean and upgrade installations are the minimum requirements.

Documentation - All documentation which is reviewed by Development prior to Alpha is reviewed by the SQA team prior to Beta. On-line help and context-sensitive Help are considered documentation, as are manuals, HTML documentation and Release Notes. SQA not only verifies technical accuracy, clarity and completeness; they also provide editorial input on consistency, style and typographical errors.

8 AUTOMATED TESTING

GUI - Automated GUI tests are run against the product at Alpha and Beta "Hand off to QA" (HQA) to validate that the GUI has remained consistent within the product throughout the development process. The automated Roadmaps walk through the client tool windows and functions, validating that each is there and that it functions.

Data Driven - Data driven scripts developed using the automation tools and auto driver scripts are exercised for

both UNIX and Windows platforms to provide repeatable, verifiable actions and results of core functions of the product. Currently these are a subset of all functionality. These are used to validate new builds prior to extensive manual testing, thus assuring both Development and SQA of the robustness of the code.

Future - Utilization of automated tools will increase as our QA product groups become more proficient at the creation of automated tests. Complete functionality testing is a goal, which will be implemented feature by feature.


9 SOFTWARE TESTING TYPES:

Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.

Accessibility Testing: Verifying a product is accessible to the people having disabilities (deaf, blind, mentally disabled etc.).

Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.

Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.

Alpha Testing: Early testing of a software product conducted by selected customers.

Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary form across different system platforms and environments.

Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.

Automated Testing:

• Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.

• The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

Backus-Naur Form: A metalanguage used to formally describe the syntax of a language.

Basic Block: A sequence of one or more consecutive, executable statements containing no branches.

Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.

Basis Set: The set of tests derived using basis path testing.

Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.

Beta Testing: Testing of a pre-release version of a software product, conducted by customers.

Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformation to an ABI specification.


Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.

Boundary Value Analysis: BVA is similar to Equivalence Partitioning but focuses on "corner cases" or values that are usually out of range as defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
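To illustrate the example in this entry (the sketch and the function name are not from the original text), the values below are the boundary-value test inputs for a function specified to accept the range -100 to 1000:

    # Boundary-value test inputs for a specified valid range of -100 .. 1000.
    LOWER, UPPER = -100, 1000

    boundary_inputs = [
        LOWER - 1,   # just below the lower bound (expected to be rejected)
        LOWER,       # on the lower bound
        LOWER + 1,   # just above the lower bound
        UPPER - 1,   # just below the upper bound
        UPPER,       # on the upper bound
        UPPER + 1,   # just above the upper bound (expected to be rejected)
    ]

    def in_range(value: int) -> bool:
        """Hypothetical function under test: accepts values in [-100, 1000]."""
        return LOWER <= value <= UPPER

    for v in boundary_inputs:
        print(v, "->", "accept" if in_range(v) else "reject")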

Branch Testing: Testing in which all branches in the program source code are tested at least once.

Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.

CAST: Computer Aided Software Testing.

Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

Cause Effect Graph: A graphical representation of inputs and the associated output effects which can be used to design test cases.

Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions, analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Coding: The generation of source code.

Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

Component: A minimal software item for which a separate specification is available.

Component Testing: See Unit Testing.

Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.

Data Dictionary: A database that contains definitions of all data items defined during analysis.

Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.

Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
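
A minimal sketch of the idea in Python, assuming a hypothetical external data file discounts.csv (columns order_total and expected_discount) and a hypothetical calculate_discount function under test:

# Data-driven test sketch: the test logic is fixed, the cases come from a file.
import csv

def calculate_discount(order_total):
    # Hypothetical function under test: 10% discount on orders of 100 or more.
    return round(order_total * 0.10, 2) if order_total >= 100 else 0.0

def run_data_driven_test(csv_path):
    failures = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):  # each row of the spreadsheet is one test case
            total = float(row["order_total"])
            expected = float(row["expected_discount"])
            actual = calculate_discount(total)
            if abs(actual - expected) > 0.001:
                failures.append((total, expected, actual))
    return failures

# Example usage (discounts.csv is an assumed external test-data file):
# print(run_data_driven_test("discounts.csv") or "all rows passed")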

Debugging: The process of finding and removing the causes of software failures.

Defect: Nonconformance to requirements or functional / program specification.

Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing: A test that exercises a feature of a product in full detail.

Dynamic Testing: Testing software through executing it. See also Static Testing.

Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Class: A portion of a component's input or output domains for which the component's behavior is assumed, based on the component's specification, to be the same.

Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
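
A minimal Python sketch, assuming a hypothetical age field specified to accept values from 18 to 65: each equivalence class (below range, within range, above range) is represented by a single test value:

# Equivalence partitioning sketch: one representative input per class.
def is_valid_age(age):
    # Hypothetical component under test: valid ages are 18 to 65 inclusive.
    return 18 <= age <= 65

partitions = [
    (10, False),  # class: below the valid range
    (40, True),   # class: inside the valid range
    (70, False),  # class: above the valid range
]

for representative, expected in partitions:
    assert is_valid_age(representative) == expected, f"failed for {representative}"
print("One representative from each equivalence class behaved as specified")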

Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.

Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software.

Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.

Functional Testing: See also Black Box Testing.

• Testing the features and operational behavior of a product to ensure they correspond to its specifications.
• Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

Glass Box Testing: A synonym for White Box Testing.

Gorilla Testing: Testing one particular module or functionality heavily.

Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

High Order Tests: Black-box tests conducted once the software has been integrated.

Independent Test Group (ITG): A group of people whose primary responsibility is software testing.

Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Installation Testing: Testing that focuses on installing and setting up the software successfully on the supported platforms and configurations, including first-time installation, upgrade and uninstallation.

Load Testing: See Performance Testing.

Localization Testing: Testing that software which has been adapted (localized) for a particular locale - language, regional formats and cultural conventions - behaves correctly in that locale.

Loop Testing: A white box testing technique that exercises program loops.

Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.

Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.

Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.

N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors. See also Regression Testing.

Path Testing: Testing in which all paths in the program source code are tested at least once.

Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

Positive Testing: Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.

Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.

Quality Management: That aspect of the overall management function that determines and implements the quality policy.

Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.

Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
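
A minimal Python sketch of such a race: two threads perform an unsynchronized read-modify-write on a shared counter, so updates are frequently lost; guarding the update with a threading.Lock removes the race:

# Race condition sketch: unsynchronized read-modify-write on shared state.
import threading

counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        current = counter       # read
        counter = current + 1   # write - another thread may have written in between

threads = [threading.Thread(target=worker, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000, but lost updates usually leave the counter short.
print("expected 200000, got", counter)
# Fix: acquire a threading.Lock around the read-modify-write so only one
# thread updates the counter at a time.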

Ramp Testing: Continuously raising an input signal until the system breaks down.

Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

Release Candidate: A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.

Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.

Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

Software Testing: A set of activities conducted with the intent of finding errors in software.

Static Analysis: Analysis of a program carried out without executing the program.

Static Analyzer: A tool that carries out static analysis.

Static Testing: Analysis of a program carried out without executing the program.

Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing.

System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

Testing:

• The process of exercising software to verify that it satisfies specified requirements and to detect errors.
• The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
• The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Automation: See Automated Testing.

Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

Test Case:

• Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirement tested, test steps, verification steps, prerequisites, outputs, test environment, etc.

• A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests - often roughly as many lines of test code as production code.
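
A minimal sketch of the rhythm in Python, assuming a hypothetical slugify helper: the unit test is written first (and fails), then just enough production code is written to make it pass:

# Test-driven development sketch.
import unittest

def slugify(title):
    # Step 2: the simplest production code that makes the test below pass.
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    # Step 1: this test is written before slugify() exists and initially fails.
    def test_spaces_become_hyphens_and_case_is_lowered(self):
        self.assertEqual(slugify("  Hello World  "), "hello-world")

if __name__ == "__main__":
    unittest.main()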

Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.

Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

Test Harness: A program or test tool used to execute tests. Also known as a Test Driver.

Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.

Test Procedure: A document providing detailed instructions for the execution of one or more test cases.

Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.

Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.

Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Total Quality Management: A company commitment to develop a process that achieves high quality product and customer satisfaction.

Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.

Usability Testing: Testing the ease with which users can learn and use a product.

Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.

Unit Testing: Testing of individual software components.

Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.

Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.

Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.

White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.

Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

Mandatory Testing Types and their purpose with methodology

FUNCTIONALITY TESTING

It's great that your product is new and cool, but does it actually work? From basic commands to clean audio and video, from graphics to grammar, everything must function perfectly. Functionality testing validates that an application or Web site conforms to its specifications and correctly performs all its required functions. This entails a series of tests which perform a feature-by-feature validation of behavior, using a wide range of normal and erroneous input data. This can involve testing of the product's user interface, database management, security, installation, networking, etc. Perhaps your development team tells you that the product is stable - that it functions perfectly and according to specifications. They tell you they are confident because they tested it themselves. Do not be lulled into a false sense of security. While your development team has the best of intentions, history has demonstrated that many dev teams are just too close to the product to provide objective and effective quality assurance. You owe it to yourself to hire a competent, reliable and independent quality assurance firm to lend a fresh pair of eyes to the project.

Purpose: The purpose of functionality testing is to reveal issues concerning the product's functionality and conformance to stated/documented behavior.

Methodology: The first step in evaluating a program's functionality is to become familiar with the program itself, and with the program's desired behavior. Ideally, this process is aided by documentation such as the program's functional specification or user manual. Even without such documentation, expected behavior can often be adequately modeled based on industry standards and tester intuition. Once a program's expected functionality has been defined, test cases or test procedures can be created that exercise the program in order to test actual behavior against expected behavior. Testing the program's functionality then involves the execution of any test cases that have been created, as well as subjecting the program to a certain amount of ad hoc testing - testing that is not rigorously structured, but instead attempts to address areas that the tester, using his/her experience, feels are high-risk areas, and usually involves exercising the program in an unconventional manner. Certain portions of a functionality testing effort can also be automated, but this depends on several factors, and should be discussed with a qualified engineer.

COMPATIBILITY

Testing to ensure compatibility of an application or Web site with different browsers, OSs, and hardware platforms. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.

Purpose: The purpose of compatibility testing is to reveal issues related to the product's interaction with other software (operating systems, browsers, installed applications…) as well as hardware (video cards, sound cards, processors…). The product's compatibility is evaluated by first identifying the hardware/software/browser components that the product is designed to support. Then a hardware/software/browser matrix is designed that indicates the configurations on which the product will be tested. Then, with input from the client, a testing script is designed that will be sufficient to evaluate compatibility between the product and the hardware/software/browser matrix. Finally, the script is executed against the matrix, and any anomalies are investigated to determine exactly where the incompatibility lies.
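
A minimal Python sketch of such a matrix, with hypothetical configuration names; in practice the agreed test script is executed once per configuration and any anomalies are investigated:

# Compatibility matrix sketch: every combination becomes one tested configuration.
from itertools import product

operating_systems = ["Windows XP", "Windows 2000", "Mac OS X"]
browsers = ["Internet Explorer 6", "Netscape 7", "Opera 7"]
hardware = ["P4 / 512MB", "P3 / 256MB"]

def run_compatibility_script(os_name, browser, machine):
    # Placeholder for the agreed compatibility test script on this configuration.
    return True

results = {}
for os_name, browser, machine in product(operating_systems, browsers, hardware):
    results[(os_name, browser, machine)] = run_compatibility_script(os_name, browser, machine)

failures = [cfg for cfg, passed in results.items() if not passed]
print(f"{len(results)} configurations tested, {len(failures)} incompatibilities found")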

WEBSITE TESTING

Bad news spreads like wildfire on the Internet. All you need is one or two frustrated web surfers spreading the word that your site doesn't work and, sooner than you can say "click here", thousands of potential hits will turn into misses. Your web site is your "window to the world". It's a storefront that tells everyone why you exist. Whether you are developing your own web site, or are a third party developer creating a site for someone else, you can do yourself a great favor by hiring a professional and experienced quality assurance firm to run the site through its paces. The web site should be fully tested - from complex data manipulation functionality to simple link verification. The web provides a million ways to gain customers, and just as many ways to lose them. If your product exists on the web, it needs to be run through every test possible to measure its ability to further your success. The deployment of WWW technology in sophisticated software solutions has created a major need for rapid, effective QA testing solutions. Testing n-tier, high hit-rate E-commerce sites is very important, as are:

• XML
• Java
• CORBA
• WAP

PERFORMANCE TESTING

Your website's performance is only as reliable as the servers behind it. A site that goes down due to too much traffic will quickly become a site that gets no traffic at all.

Performance testing can be applied to understand your application or WWW site's scalability, or to benchmark the performance in your environment of third party products such as servers and middleware for potential purchase. This sort of testing is particularly useful to identify performance bottlenecks in high hit-rate Web sites. Performance testing generally involves an automated test suite as this allows easy simulation of a variety of normal, peak, and exceptional load conditions.

Reliability is the key to customer loyalty and your competition knows this. That is why it is imperative that your servers be tested for stability, scalability and performance using the latest workload simulations. Downtime is costly and if your site can’t handle the traffic, your customers will click to one that can.

Use the latest in sophisticated performance testing tools to simulate tens of thousands of virtual users hitting your servers at once. Use the resulting data to evaluate the performance of your site under normal load (performance testing); under anticipated future load (scalability testing); under highly abnormal peak load (stress testing); and under uninterrupted, sustained load (endurance testing). As load testing proceeds, you will receive regular reports detailing how your servers are responding to the imposed load.

Purpose: The purpose of stress testing is to reveal issues related to the product's performance under extreme/non-normal operating environments (low system resources, heavy load…), and also to quantify the stress level at which a system's response significantly degrades.

Methodology: Using sophisticated software tools, an application is tested by increasing the load on the application to any desired level. During the load, various performance measurements of the system under test are captured and documented. These data are then analyzed to determine the overall health of the system and to identify system bottlenecks. They can also be used to identify the load at which the system's performance begins to significantly degrade. Make no mistake: knowing, not predicting, how your servers are going to respond to the public's demand is imperative if you are going to be successful.
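
A minimal Python sketch of the idea behind such load tools, assuming a placeholder URL for the system under test: a number of "virtual users" issue requests concurrently while response times are captured for later analysis:

# Load simulation sketch: concurrent virtual users with response-time capture.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://example.com/"  # placeholder for the system under test
VIRTUAL_USERS = 25
REQUESTS_PER_USER = 10

def virtual_user(_):
    timings = []
    for _request in range(REQUESTS_PER_USER):
        start = time.time()
        try:
            urllib.request.urlopen(TARGET_URL, timeout=10).read()
            timings.append(time.time() - start)
        except Exception:
            timings.append(None)  # record the failed request
    return timings

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    all_timings = [t for user in pool.map(virtual_user, range(VIRTUAL_USERS)) for t in user]

ok = [t for t in all_timings if t is not None]
if ok:
    print(f"{len(ok)}/{len(all_timings)} requests succeeded, average response {sum(ok) / len(ok):.3f}s")
else:
    print("all requests failed")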

AUTOMATED TESTING

Allowing software to test software -- just another weapon in our arsenal. Automated testing is a valuable asset that provides us with additional ways of getting to the heart of the problem. Imagine if you could write a program that simulates thousands of executed commands in the same exact order. Each time that your software did not perform to your specifications, the program would record the exact command that caused the anomaly. Then, once you thought you fixed the problem, you could then run the very same set of commands to see if, in fact, you were successful.

This is automated testing.

Automated testing can supplement the manual testing process and provide valuable insight in a speedy, objective way. Use of these powerful, sophisticated automated testing tools means that automated test scripts can be created that mimic the very same user interactions over and over, thereby limiting the amount of "live" testing necessary. Quite often, for testing efforts that are either very extensive or consist of several testing cycles, it is worthwhile to consider automating a portion of the testing. Though the testing of some aspects of a program cannot be automated, and some should not be automated, automation can significantly reduce the manpower and cost required to perform some of the testing. If automation might be beneficial to your project, include automation in the Master Test Plan and Proposal. Automation techniques can be applied to various areas of a testing effort, including installation testing, performance testing, functionality testing and compatibility testing; whether automation should be considered for each area can then be evaluated. Automated testing is an extremely useful discipline that aids in the overall testing effort. At the same time, we believe that there is still no substitute (yet, anyway) for good, old-fashioned human interaction. After all, your software was not meant to be used by computers, right?

REGRESSION TESTING

Sometimes one step forward turns into two steps back. We know that in the world of software development, solutions originally intended to fix problems can create new and potentially more serious problems.

Similar in scope to a functional test, a regression test allows a consistent, repeatable validation of each new release of a product or Web site. Such testing ensures reported product defects have been corrected for each new release and that no new quality problems were introduced in the maintenance process. Though regression testing can be performed manually, an automated test suite is often used to reduce the time and resources needed to perform the required testing. Enter regression testing. This discipline enables us to track issues, check the effectiveness of a solution, and detect any new issues which may have been created as a result of fixing the original problem. Reports are generated, problems are tracked, and the process continues until all of the issues are solved or a new version is developed.

Key to the careful tracking of issues is Beta Bugs. The online bug tracking software is available to you and your staff 24 hours a day, seven days a week. It is absolutely secure and very easy to use. With Beta Bugs, all people involved in the quality assurance process will have access to the latest outstanding issues thereby gauging progress every step of the way. As the regression phase of the QA cycle continues, regular updates to Beta Bugs help coordinate the efforts of both the development and QA teams.

Purpose: The purpose of regression testing is to ensure that previously detected and fixed issues really are fixed, they do not reappear, and new issues are not introduced into the program as a result of the changes made to fix the issues.

Methodology: Typically regression testing should be performed on a daily basis. Once an issue in the defect tracking database has been fixed, it is reassigned back for final resolution. You can now either reopen the issue, if it has not been satisfactorily addressed, or close the issue if it has, indeed, been fixed. For more involved projects lasting several months, several full regression passes may be scheduled in addition to the continuous regression testing mentioned above. Full regression passes involve re-verifying all closed issues in the defect tracking database as truly closed. A full regression pass is also typically performed at the very end of the testing effort as part of a final acceptance test. In addition to verifying closed issues, regression testing seeks to verify that changes made to fix known defects do not cause further defects. You can now produce a regression testing suite consisting of test cases that evaluate the stability of all modules of the software product. Quite often, automation of this regression testing suite is well worth considering.
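
A minimal sketch in Python, assuming a hypothetical fixed defect (#142) and the function it affected: each fixed defect gets a guard test, and the whole suite is re-run against every new build:

# Regression suite sketch: one guard test per fixed defect, re-run on every build.
import unittest

def apply_discount(total, rate):
    # Hypothetical production code; defect #142 was a rounding error fixed here.
    return round(total * (1 - rate), 2)

class RegressionDefect142(unittest.TestCase):
    # Guards against the reappearance of defect #142 (incorrect rounding).
    def test_rounding_is_to_two_decimal_places(self):
        self.assertEqual(apply_discount(19.99, 0.15), 16.99)

def regression_suite():
    # Group all defect-guard tests into one suite that runs for each release.
    return unittest.TestLoader().loadTestsFromTestCase(RegressionDefect142)

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(regression_suite())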

TESTING THE MULTI-PLATFORM SYSTEM

Multi-platform (e.g. client-server or web) testing has some special requirements, but for the most part, the techniques, methods, risks and results are exactly the same. Now that you have at least two separate sets of platforms, each with their own development tools, you need to ensure at some point that the results of the development process mesh well together. Black-box testing, because of its independence, is most useful here.

Some of this is addressed by setting up integration specs at the beginning that identify the databases, messages, access tools and network requirements that will need to be setup (and tested, of course). Be sure to specify performance requirements as well.

In late integration test, the pieces come together. It works out well if all previously tested code is in the same pipeline. For example, if you are developing an AS/400 server portion and a PC client portion, manage both in one management system that is responsible for managing all of the requests, specifications, code, and documentation. When integration testing begins, all of the required components will enter the test environment simultaneously. The tester will be assured by the management system's control of the process that the previous steps had assembled the requisite pieces of the system before promoting them to multi-platform integration test. If not, it will be clear who was responsible. Accountability is important to a smoothly running process.

WINDOWS COMPLIANCE TESTING

For Each Application

Start the application by double clicking on its ICON. The loading message should show the application name, version number, and a bigger pictorial representation of the icon.

No Login is necessary

The main window of the application should have the same caption as the caption of the icon in Program Manager.

Closing the application should result in an "Are you Sure" message box

Attempt to start the application twice. This should not be allowed - you should be returned to the main window.

Try to start the application twice as it is loading.

On each window, if the application is busy, then the hour glass should be displayed. If there is no hour glass (e.g. alpha access enquiries) then some enquiry in progress message should be displayed.

All screens should have a Help button; pressing F1 should do the same.

For Each Window in the Application

If Window has a Minimize Button, click it.

The window should return to an icon at the bottom of the screen. This icon should correspond to the original icon under Program Manager.

Double Click the Icon to return the Window to its original size.

The window caption for every application should have the name of the application and the window name - especially the error messages. These should be checked for spelling, English and clarity, especially on the top of the screen. Check that the title of the window makes sense.

If the screen has a Control menu, then use all ungreyed options. (See below)

Check all text on window for Spelling/Tense and Grammar

Use TAB to move focus around the Window. Use SHIFT+TAB to move focus backwards.

Tab order should be left to right, and Up to Down within a group box on the screen. All controls should get focus - indicated by dotted box, or cursor. Tabbing to an entry field with text in it should highlight the entire text in the field.

The text in the Micro Help line should change - Check for spelling, clarity and non-updateable etc.

If a field is disabled (grayed) then it should not get focus. It should not be possible to select them with either the mouse or by using TAB. Try this for every grayed control.

Never updateable fields should be displayed with black text on a grey background with a black label.

All text should be left-justified, followed by a colon tight to it. In a field that may or may not be updateable, the label text and contents changes from black to grey depending on the current status.

List boxes are always white background with black text whether they are disabled or not. All others are grey.

In general, do not use go to screens, use go sub, i.e. if a button causes another screen to be displayed, the screen should not hide the first screen, with the exception of tab in 2.0

When returning, return to the first screen cleanly, i.e. no other screens/applications should appear.

In general, double-clicking is not essential. In general, everything can be done using both the mouse and the keyboard.

All tab buttons should have a distinct letter.

Text Boxes

Move the Mouse Cursor over all Enterable Text Boxes. Cursor should change from arrow to Insert Bar. If it doesn't then the text in the box should be grey or non-updateable. Refer to previous page.

Enter text into Box

Try to overflow the text by typing too many characters - this should be stopped. Check the field width with capital W's.

Enter invalid characters - letters in amount fields; try strange characters like +, -, * etc. in all fields.

SHIFT and Arrow should select characters. Selection should also be possible with the mouse. Double click should select all text in the box.

Option (Radio Buttons)

Left and Right arrows should move the 'ON' selection. So should Up and Down. Select with the mouse by clicking.

Check Boxes

Clicking with the mouse on the box, or on the text, should SET/UNSET the box. SPACE should do the same.

Command Buttons

If Command Button leads to another Screen, and if the user can enter or change details on the other screen then the Text on the button should be followed by three dots.

All Buttons except for OK and Cancel should have a letter Access to them. This is indicated by a letter underlined in the button text. The button should be activated by pressing ALT+Letter. Make sure there is no duplication.

Click each button once with the mouse - this should activate it.
Tab to each button and press SPACE - this should activate it.
Tab to each button and press RETURN - this should activate it.
The above are VERY IMPORTANT, and should be done for EVERY command button.

Tab to another type of control (not a command button). One button on the screen should be the default (indicated by a thick black border). Pressing RETURN in ANY non-command-button control should activate the default button.

If there is a Cancel button on the screen, then pressing <Esc> should activate it.

If pressing the command button results in uncorrectable data, e.g. closing an action step, there should be a message phrased positively with Yes/No answers where Yes results in the completion of the action.

Drop Down List Boxes

Pressing the Arrow should give list of options. This List may be scrollable. You should not be able to type text in the box.

Pressing a letter should bring you to the first item in the list that starts with that letter. Pressing 'Ctrl - F4' should open/drop down the list box.

Spacing should be compatible with the existing windows spacing (word etc.). Items should be in alphabetical order with the exception of blank/none which is at the top or the bottom of the list box.

Dropping down with an item selected should display the list with the selected item at the top. Make sure only one space appears; there shouldn't be a blank line at the bottom.

Combo Boxes

Should allow text to be entered. Clicking the arrow should allow the user to choose from the list.

List Boxes

Should allow a single selection to be chosen, by clicking with the mouse, or using the Up and Down Arrow keys.

Pressing a letter should take you to the first item in the list starting with that letter.

If there is a 'View' or 'Open' button beside the list box, then double clicking on a line in the list box should act in the same way as selecting an item in the list box, then clicking the command button.

Force the scroll bar to appear, make sure all the data can be seen in the box.

Screen Validation Checklist

AESTHETIC CONDITIONS:

1. Is the general screen background the correct colour?
2. Are the field prompts the correct colour?
3. Are the field backgrounds the correct colour?
4. In read-only mode, are the field prompts the correct colour?
5. In read-only mode, are the field backgrounds the correct colour?
6. Are all the screen prompts specified in the correct screen font?
7. Is the text in all fields specified in the correct screen font?
8. Are all the field prompts aligned perfectly on the screen?
9. Are all the field edit boxes aligned perfectly on the screen?
10. Are all group boxes aligned correctly on the screen?
11. Should the screen be resizable?
12. Should the screen be minimisable?
13. Are all the field prompts spelt correctly?
14. Are all character or alpha-numeric fields left justified? This is the default unless otherwise specified.
15. Are all numeric fields right justified? This is the default unless otherwise specified.
16. Is all the micro help text spelt correctly on this screen?
17. Is all the error message text spelt correctly on this screen?
18. Is all user input captured in UPPER case or lower case consistently?
19. Where the database requires a value (other than null) then this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
20. Assure that all windows have a consistent look and feel.
21. Assure that all dialog boxes have a consistent look and feel.

VALIDATION CONDITIONS:

1. Does a failure of validation on every field cause a sensible user error message?
2. Is the user required to fix entries which have failed validation tests?
3. Have any fields got multiple validation rules and if so are all rules being applied?
4. If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field) is the invalid entry identified and highlighted correctly with an error message?
5. Is validation consistently applied at screen level unless specifically required at field level?
6. For all numeric fields check whether negative numbers can and should be able to be entered.
7. For all numeric fields check the minimum and maximum values and also some mid-range values allowable.
8. For all character/alphanumeric fields check the field to ensure that there is a character limit specified and that this limit is exactly correct for the specified database size.
9. Do all mandatory fields require user input?
10. If any of the database columns don't allow null values then the corresponding screen fields must be mandatory. (If any field which initially was mandatory has become optional then check whether null values are allowed in this field.)

NAVIGATION CONDITIONS:

1. Can the screen be accessed correctly from the menu?
2. Can the screen be accessed correctly from the toolbar?
3. Can the screen be accessed correctly by double clicking on a list control on the previous screen?
4. Can all screens accessible via buttons on this screen be accessed correctly?
5. Can all screens accessible by double clicking on a list control be accessed correctly?
6. Is the screen modal, i.e. is the user prevented from accessing other functions when this screen is active, and is this correct?
7. Can a number of instances of this screen be opened at the same time and is this correct?

USABILITY CONDITIONS:

1. Are all the dropdowns on this screen sorted correctly? Alphabetic sorting is the default unless otherwise specified.
2. Is all date entry required in the correct format?
3. Have all pushbuttons on the screen been given appropriate Shortcut keys?
4. Do the Shortcut keys work correctly?
5. Have the menu options which apply to your screen got fast keys associated and should they have?
6. Does the Tab Order specified on the screen go in sequence from top left to bottom right? This is the default unless otherwise specified.
7. Are all read-only fields avoided in the TAB sequence?
8. Are all disabled fields avoided in the TAB sequence?
9. Can the cursor be placed in the micro help text box by clicking on the text box with the mouse?
10. Can the cursor be placed in read-only fields by clicking in the field with the mouse?
11. Is the cursor positioned in the first input field or control when the screen is opened?
12. Is there a default button specified on the screen?
13. Does the default button work correctly?

DATA INTEGRITY CONDITIONS:

1. Is the data saved when the window is closed by double clicking on the close box?
2. Check the maximum field lengths to ensure that there are no truncated characters.
3. Where the database requires a value (other than null) then this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
4. Check maximum and minimum field values for numeric fields.
5. If numeric fields accept negative values can these be stored correctly on the database and does it make sense for the field to accept negative numbers?
6. If a set of radio buttons represent a fixed set of values such as A, B and C then what happens if a blank value is retrieved from the database? (In some situations rows can be created on the database by other functions which are not screen based and thus the required initial values can be incorrect.)
7. If a particular set of data is saved to the database check that each value gets saved fully to the database, i.e. beware of truncation (of strings) and rounding of numeric values.

MODES (EDITABLE READ-ONLY) CONDITIONS:

1. Are the screen and field colours adjusted correctly for read-only mode?
2. Should a read-only mode be provided for this screen?
3. Are all fields and controls disabled in read-only mode?
4. Can the screen be accessed from the previous screen/menu/toolbar in read-only mode?
5. Can all screens available from this screen be accessed in read-only mode?
6. Check that no validation is performed in read-only mode.

GENERAL CONDITIONS:

1. Assure the existence of the "Help" menu.
2. Assure that the proper commands and options are in each menu.
3. Assure that all buttons on all tool bars have corresponding key commands.
4. Assure that each menu command has an alternative (hot-key) key sequence which will invoke it where appropriate.
5. In drop down list boxes, ensure that the names are not abbreviations / cut short.
6. In drop down list boxes, assure that the list and each entry in the list can be accessed via appropriate key / hot key combinations.
7. Ensure that duplicate hot keys do not exist on each screen.
8. Ensure the proper usage of the escape key (which is to undo any changes that have been made) and that it generates a caution message "Changes will be lost – Continue yes/no".
9. Assure that the cancel button functions the same as the escape key.
10. Assure that the Cancel button operates as a Close button when changes have been made that cannot be undone.
11. Assure that only command buttons which are used by a particular window, or in a particular dialog box, are present - i.e. make sure they don't work on the screen behind the current screen.
12. When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.
13. Assure that OK and Cancel buttons are grouped separately from other command buttons.
14. Assure that command button names are not abbreviations.
15. Assure that all field labels/names are not technical labels, but rather are names meaningful to system users.
16. Assure that command buttons are all of similar size and shape, and the same font & font size.

17. Assure that each command button can be accessed via a hot key combination.
18. Assure that command buttons in the same window/dialog box do not have duplicate hot keys.
19. Assure that each window/dialog box has a clearly marked default value (command button, or other object) which is invoked when the Enter key is pressed - and NOT the Cancel or Close button.
20. Assure that focus is set to an object/button which makes sense according to the function of the window/dialog box.
21. Assure that all option button (and radio button) names are not abbreviations.
22. Assure that option button names are not technical labels, but rather are names meaningful to system users.
23. If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.
24. Assure that option box names are not abbreviations.
25. Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas ("Group Box").
26. Assure that the Tab key sequence which traverses the screens does so in a logical way.
27. Assure consistency of mouse actions across windows.
28. Assure that the color red is not used to highlight active objects (many individuals are red-green color blind).
29. Assure that the user will have control of the desktop with respect to general color and highlighting (the application should not dictate the desktop background characteristics).
30. Assure that the screen/window does not have a cluttered appearance.
31. Ctrl + F6 opens the next tab within a tabbed window.
32. Shift + Ctrl + F6 opens the previous tab within a tabbed window.
33. Tabbing will open the next tab within a tabbed window if on the last field of the current tab.
34. Tabbing will go onto the 'Continue' button if on the last field of the last tab within a tabbed window.
35. Tabbing will go onto the next editable field in the window.
36. Banner style, size and display are exactly the same as in existing windows.
37. If there are 8 or fewer options in a list box, display all options on open of the list box - there should be no need to scroll.
38. Errors on continue will cause the user to be returned to the tab and the focus should be on the field causing the error (i.e. the tab is opened, highlighting the field with the error on it).
39. Pressing continue while on the first tab of a tabbed window (assuming all fields filled correctly) will not open all the tabs.
40. On open of a tab, focus will be on the first editable field.
41. All fonts are to be the same.
42. Alt+F4 will close the tabbed window and return you to the main screen or previous screen (as appropriate), generating the "changes will be lost" message if necessary.
43. There is micro help text for every enabled field & button.
44. Ensure all fields are disabled in read-only mode.
45. There are progress messages on load of tabbed screens.
46. Return operates Continue.
47. If the retrieve on load of a tabbed window fails, the window should not open.

SPECIFIC FIELD TESTS

Date Field Checks

Assure that leap years are validated correctly & do not cause errors/miscalculations

Assure that month code 00 and 13 are validated correctly & do not cause errors/miscalculations

Assure that 00 and 13 are reported as errors

Assure that day values 00 and 32 are validated correctly & do not cause errors/miscalculations

Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/miscalculations

Assure that Feb. 30 is reported as an error

Assure that century change is validated correctly & does not cause errors/miscalculations

Assure that out of cycle dates are validated correctly & do not cause errors/miscalculations
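
Several of the above date conditions can be automated; a minimal Python sketch using the standard datetime module to decide validity:

# Date field check sketch: invalid dates must be rejected, valid ones accepted.
from datetime import date

def is_valid_date(year, month, day):
    # Return True if (year, month, day) forms a real calendar date.
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

date_cases = [
    ((2004, 2, 29), True),    # leap year - 29 Feb is valid
    ((2003, 2, 29), False),   # non-leap year - 29 Feb is invalid
    ((2004, 2, 30), False),   # Feb. 30 must be reported as an error
    ((2004, 0, 15), False),   # month code 00
    ((2004, 13, 15), False),  # month code 13
    ((2004, 6, 0), False),    # day value 00
    ((2004, 6, 32), False),   # day value 32
    ((1999, 12, 31), True),   # century change boundary
    ((2000, 1, 1), True),
]

for (y, m, d), expected in date_cases:
    assert is_valid_date(y, m, d) == expected, f"unexpected result for {y}-{m}-{d}"
print("All date field checks behaved as expected")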

Numeric Fields

Assure that lowest and highest values are handled correctly

Assure that invalid values are logged and reported

Assure that valid values are handled by the correct procedure

Assure that numeric fields with a blank in position 1 are processed or reported as an error

Assure that fields with a blank in the last position are processed or reported as an error

Assure that both + and - values are correctly processed

Assure that division by zero does not occur

Include value zero in all calculations

Include at least one in-range value

Include maximum and minimum range values

Include out of range values above the maximum and below the minimum

Assure that upper and lower values in ranges are handled correctly

Alpha Field Checks

Use blank and non-blank data

Include lowest and highest values

Include invalid characters & symbols

Include valid characters

Include data items with first position blank

Include data items with last position blank

VALIDATION TESTING - STANDARD ACTIONS

On every screen, exercise each of the following standard actions in each of the Add, View, Change and Delete modes:

Continue
Cancel
Fill each field - Valid data
Fill each field - Invalid data
Different Check Box combinations
Scroll Lists
Help
Fill Lists and Scroll
Tab
Tab Order
Shift Tab
Shortcut keys - Alt + F

CLIENT/SERVER PERFORMANCE TESTING PROCESS

Unlike host-based systems, it is usually not possible to model (and predict) the performance of a C/S system because of its increased complexity. Usually, some simple, informal tests of an untried architecture are performed during system development to give some indication of the actual performance under real loads. Such informal tests may give some confidence, but are unreliable when it comes to predicting response times under production loads.

Performance testing using a simulated load (sized in accordance with the users' business volume estimates), with response time measurements compared against agreed user requirements, is the only practical method of predicting whether a system will perform acceptably.

Although it is possible for performance tests to be conducted with testers executing manual test scripts, this paper is concerned with performance tests which use automated test running tools. Automated test running tools make use of test scripts which define the actions required to simulate a user's activity on a client application or messages sent by a client across the network to servers. Most proprietary test running tools have their own script languages which are, in many ways, like programming languages.

Performance Testing Objectives

The objectives of a performance test are to demonstrate that the system meets requirements for transaction throughput and response times simultaneously. More formally, we can define the primary objective as:

"To demonstrate that the system functions to specification with acceptable response times while processing the required transaction volumes on a production sized database."

The main deliverables from such a test, prior to execution, are automated test scripts and an infrastructure to be used to execute automated tests for extended periods. This infrastructure is an asset, and an expensive one too, so it pays to make as much use of this infrastructure as possible.

Fortunately, the test infrastructure is a test bed which can be used for other tests with broader objectives, which we can summarize as:

• Assessing the system's capacity for growth - the load and response data gained from the tests can be used to validate the capacity planning model and assist decision making.

• Identifying weak points in the architecture - the controlled load can be increased to extreme levels to stress the architecture and break it - bottlenecks and weak components can be fixed or replaced.

• Detecting obscure bugs in software - tests executed for extended periods can reveal failures caused by memory leaks and expose obscure contention problems or conflicts.

• Tuning the system - repeat runs of tests can be performed to verify that tuning activities are having the desired effect - improving performance.

• Verifying resilience and reliability - executing tests at production loads for extended periods is the only way to assess the system's resilience and reliability to ensure required service levels are likely to be met.

The test infrastructure can be used to address all these objectives and other variations on these themes. A comprehensive test strategy would define a test infrastructure to enable all these objectives to be met.

Pre-Requisites for Performance Testing

We can identify five pre-requisites for a performance test. Not all of these need be in place prior to planning or preparing the test (although this might be helpful), but rather, the list below defines what is required before a test can be executed.

Quantitative, Relevant, Measurable, Realistic, Achievable Requirements

As a foundation to all tests, performance objectives, or requirements, should be agreed prior to the test so that a determination of whether the system meets requirements can be made. Requirements for system throughput or response times, in order to be useful as a baseline to compare performance results, should have the following attributes. They must be:

• Quantitative - expressed in quantifiable terms such that when response times are measured, a sensible comparison can be made. For example, response time requirements should be expressed as a number of seconds, minutes or hours.

• Relevant - a response time must be relevant to a business process. For example, a response time might be defined within the context of a telesales operator capturing customer enquiry details, where it should be suitably quick, or within the context of a report generated as part of a monthly management reporting process, where a delay of ten minutes might be acceptable.

• Measurable - a response time should be defined such that it can be measured using a tool or stopwatch and at reasonable cost. It will not be practical to measure the response time of every transaction in the system in the finest detail.

• Realistic - response time requirements should be justifiable when compared with the durations of the activities within the business process the system supports. Clearly, it is not reasonable to demand sub-second response times for every system function, where some functions relate to monthly or occasional business processes which might actually take many minutes or hours to prepare or complete.

• Achievable - response times should take some account of the cost of achieving them. There is little point in agreeing to response times which are clearly unachievable for a reasonable cost (i.e. within the budget for the system).

Stable System

A test team attempting to construct a performance test of a system whose software is of poor quality is unlikely to be successful. If the software crashes regularly it will probably not withstand the relatively minor stress of repeated use. Testers will not be able to record scripts in the first instance, or may not be able to execute a test for a reasonable length of time before the software, middleware or operating systems crash. Performance tests stress all architectural components to some degree, but for performance testing to produce useful results the system infrastructure should be both reliable and resilient.

Realistic Test Environment

The test environment should ideally be the production environment or a close simulation and be dedicated to the performance test team for the duration of the test. Often this is not possible. However, for the results of the test to be useful, the test environment should be comparable to the final production environment. Even with an environment which is somewhat different from the production environment, it should still be possible to interpret the results obtained using a model of the system to predict, with some confidence, the behaviour of the target environment. A test environment which bears no similarity to the final environment may be useful for finding obscure errors in the code, but is, however, useless for a performance test.


A simple example where a compromise might be acceptable would be where only one server is available for testing but where the final architecture will balance the load between two identical servers. Reducing the load imposed to half during the test might provide a good test from the point of view of a server, but might, however, understate the load on the network. In all cases, the compromise environment to be used should be discussed with the technical architect who may be able to provide the required interpretations.

The performance test will be built to provide loads which simulate defined load profiles and can also be adjusted to impose higher loads. If the environment is such that, say, a 20% error in any results obtained from tests is expected, extra confidence may be gained by adjusting the load imposed by 20% (or more) to see if performance is still acceptable. Although not entirely scientific, such tests should increase confidence in the final system as delivered if the tests show performance to be acceptable.

Controlled Test Environment

Performance testers require stability not only in the hardware and software in terms of its reliability and resilience, but also need changes in the environment or software under test to be minimized. Automated scripts are extremely sensitive to changes in the behaviors of the software under test. Test scripts designed to drive client software GUIs are prone to fail immediately, if the interface is changed even slightly. Changes in the operating system environment or database are equally likely to disrupt test preparation as well as execution and should be strictly controlled. The test team should ideally have the ability to refuse and postpone upgrades in any component of the architecture until they are ready to incorporate changes to their tests. Changes intended to improve performance or the reliability of the environment would normally be accepted as they become available.

Performance Testing Toolkit

The execution of a performance test must be, by its nature, completely automated. However, there are requirements for tools throughout the test process. Test tools are considered in more detail later, but the five main tool requirements for our `Performance Testing Toolkit' are summarized here:

• Test Database Creation/Maintenance - to create the large volumes of data on the database which will be required for the test. Usually SQL or `Procedural SQL' database tools (see the data generation sketch below).

• Load generation - tools can be of two basic types, either a test running tool which drives the client application, or a test driver which simulates client workstations.

• Application Running Tool - test running tool which drives the application under test and records response time measurements. (May be the same tool used for load generation).

• Resource Monitoring - utilities which can monitor and log both client and server system resources, network traffic, database activity.

• Results Analysis and Reporting - test running and resource monitoring tools can capture large volumes of results data.

Although many such tools offer facilities for analysis, it is often useful to be able to combine results from these various sources and produce combined summary test reports. This can usually be achieved using PC spreadsheet, database and word processing tools.
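To illustrate the Test Database Creation/Maintenance item above, the following is a minimal, hypothetical sketch of generating a large volume of synthetic rows by driving SQL INSERT statements from Python. The customer table and the SQLite database are illustrative stand-ins; the real test database, schema and row counts will differ.

    # Minimal sketch: populate a test database with a large volume of synthetic rows.
    # The "customer" table and SQLite are stand-ins for the real DBMS and schema.
    import sqlite3
    import random

    conn = sqlite3.connect("perf_test.db")
    conn.execute("CREATE TABLE IF NOT EXISTS customer (id INTEGER PRIMARY KEY, name TEXT, region TEXT)")

    regions = ["BHM", "BTL", "WEM"]
    rows = [(i, "Customer %06d" % i, random.choice(regions)) for i in range(1, 100001)]

    # executemany keeps the insert fast enough for hundreds of thousands of rows
    conn.executemany("INSERT INTO customer (id, name, region) VALUES (?, ?, ?)", rows)
    conn.commit()
    conn.close()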


Performance Requirements

Performance requirements normally comprise three components:

• Response time requirements.
• Transaction volumes detailed in `Load Profiles'.
• Database volumes.

Response Time Requirements

When asked to specify performance requirements, users normally focus attention on response times, and often wish to define requirements in terms of generic response times. A single response time requirement for all transactions might be simple to define from the user's point of view, but is unreasonable. Some functions are critical and require short response times, but others are less critical and response time requirements can be less stringent.

Some guidelines for defining response time requirements are presented here:

• For an accurate representation of the performance experienced by a live user, response times should be defined as the period between a user requesting the system to do something (e.g. clicking on a button) to the system returning control to the user.

• Requirements can often vary in criticality according to the different business scenarios envisaged. As a consequence, quick responses are not always required. Business scenarios are often matched with load profiles.

• Generic requirements are described as `catch all' thresholds. Examples of generic requirements are times to `perform a screen update', `scroll through a page of data' and `navigate between screens'.

• Specific requirements define the requirements for identified system transactions. An example would be the time `to register a new purchase order in screen A0101'.

• Response times for specific system functions should be considered in the context of the business process the system supports. As a rule of thumb, if a business process is of short duration, e.g. logging a customer call, response times should be suitably brief. If a business process is of longer duration, e.g. preparing a monthly report, longer delays ought to be acceptable.

• Requirements are usually specified in terms of acceptable maximum, average or 95th percentile times.

Response times should be broken down into types: generic and specific, where appropriate. Generic response times can be defined for system updates, queries or reports and are often qualified by complexity. Response time requirements for specific system functions should be stated separately.

The test team should set out to measure response times for all specific requirements and a selection of transactions which provide two or three examples of generic requirements.

Load Profiles

The second component of performance requirements is a schedule of load profiles. A load profile is a definition of the level of system loading expected to occur during a specific business scenario. Business scenarios might cover different situations when the users' organization has different levels of activity or involve a varying mix of activities which must be supported by the system.


Examples of business scenarios might be:

• Average load, busy hour, busy 5 minutes - useful where the mix of activities is relatively constant, but the volume of tasks undertaken varies.

• Normal, end of month, end of year - where an organization’s activities change over time with peaks occurring at specific periods.

• Quiescent, local fault, widespread emergency - where a support organization might have quiet periods interspersed with occasional peaks and must cater for 1 in 200 year disasters.

A comprehensive load profile specification will identify the following for each business scenario:

• User types or roles.
• Identification of all locations.
• Distribution (numbers) of users of each type at each location.
• Business processes (or system transactions) performed by each user type at each location and the estimated transaction rate.

Table 2 below is an extract from a typical load profile specification.

Scenario: Major Fault

ID  Transaction         User Type          Users  Location  TXN rate
23  Log Customer Fault  xxxxxx             100    BHM       20/hr
                        xxxxxx             80     BTL       15/hr
                        xxxxxx             140    WEM       25/hr
24  Allocate Fault      Xxxxx1             5      BHM       10/hr
                        xxxxx1             7      WEM       14/hr
25  Escalate Fault      xxxx leader        10     BHM       5/hr
                        xxxx leader        10     WEM       10/hr
26  Clear Fault         xFault Controller  5      BHM       10/hr
                        xFault Controller  7      WEM       14/hr

Table 2. Example Load Profile.
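Purely as an illustration, a load profile such as Table 2 might be captured as simple data that a home-grown test driver or workload definition could read. The structure below is an assumption, not part of any particular tool; the values are taken from the first two transactions of Table 2.

    # Sketch: part of Table 2 expressed as data a test driver could consume.
    # The dictionary structure is illustrative; the values come from the example load profile.
    major_fault_profile = [
        {"id": 23, "transaction": "Log Customer Fault", "location": "BHM", "users": 100, "rate_per_hour": 20},
        {"id": 23, "transaction": "Log Customer Fault", "location": "BTL", "users": 80,  "rate_per_hour": 15},
        {"id": 23, "transaction": "Log Customer Fault", "location": "WEM", "users": 140, "rate_per_hour": 25},
        {"id": 24, "transaction": "Allocate Fault",     "location": "BHM", "users": 5,   "rate_per_hour": 10},
        {"id": 24, "transaction": "Allocate Fault",     "location": "WEM", "users": 7,   "rate_per_hour": 14},
    ]

    # Simple aggregation: how many simulated users are needed at each location.
    users_by_location = {}
    for entry in major_fault_profile:
        users_by_location[entry["location"]] = users_by_location.get(entry["location"], 0) + entry["users"]
    print(users_by_location)   # {'BHM': 105, 'BTL': 80, 'WEM': 147}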

Database Volumes

Data volumes, defining the numbers of table rows which should be present in the database after a specified period of live running, complete the load profile. Typically, data volumes estimated to exist after one year's use of the system are used, but two-year volumes or greater might be used in some circumstances, depending on the business application.

Process

We can identify a four-stage test process, to which an additional stage, tuning, can be added. Tuning can be compared with the bug-fixing activity that usually accompanies functional test activities. Tuning may involve changes to the architectural infrastructure and often does not affect the functionality of the system under test. A schematic of the test process is presented in Figure 1 below. The five stages in the process are described in outline in Figure 2.


Figure 1. High Level Performance Test Process.

Figure 2. Performance Test Process Outline.

Specification

• Documentation of performance requirements including:
  o database volumes
  o load profiles having relevance to the business
  o response time requirements.

• Preparation of a schedule of load profile tests to be performed (e.g. normal, busy hour, busy 5 minutes or some other scheme).

• Inventory of system transactions comprising the loads to be tested.
• Inventory of system transactions to be executed and response times measured.
• Description of analyses and reports to be produced.

Preparation

• Preparation of a test database with appropriate data volumes.
• Scripting of system transactions to comprise the load.
• Scripting of system transactions whose response is to be measured (possibly the same as the load transactions).
• Development of Workload Definitions (i.e. the implementations of Load Profiles).
• Preparation of test data to parameterize automated scripts.

Execution

• Execution of interim tests.
• Execution of performance tests.
• Repeat test runs, as required.

Analysis

• Collection and archiving of test results.
• Preparation of tabular and graphical analyses.
• Preparation of reports including interpretation and recommendations.


Tuning

• Sundry changes to application software, middleware, database organisation.
• Changes to server system parameters.
• Upgrades to client or server hardware, network capacity or routing.

Incremental Test Development

Test development is usually performed incrementally and follows a RAD-like process. The process has four stages:

• Each test script is prepared and tested in isolation to debug it.
• Scripts are integrated into the development version of the workload and the workload is executed to test that the new script is compatible.
• As the workload grows, the developing test framework is continually refined, debugged and made more reliable. Experience and familiarity with the tools also grows, and the process used is fine-tuned.
• When the last script is integrated into the workload, the test is executed as a `dry run' to ensure it is completely repeatable and reliable, and ready for the formal tests.

Interim tests can provide useful results:

• Runs of the partial workload and test transactions may expose performance problems. These can be reported and acted upon within the development groups or by network, system or database administrators.

• Tests of low volume loads can also provide an early indication of network traffic and potential bottlenecks when the test is scaled up.

• Poor response times can be caused by poor application design and can be investigated and cleared up by the developers earlier. Inefficient SQL can also be identified and optimized.

• Repeatable test scripts can be run for extended periods as soak tests. Such tests can reveal errors, such as memory leaks, which would not normally be found during functional tests.

Test Execution

The execution of formal performance tests requires some stage management or co-ordination. As the time approaches to execute the test, team members who will execute the test as well as those who will monitor the test must be warned, well in advance. The `test monitoring' team members are often working in dispersed locations and need to be kept very well informed if the test is to run smoothly and all results are to be captured correctly. The test monitoring team members need to be aware of the time window in which the test will be run and when they should start and stop their monitoring tools. They also need to be aware of how much time they have to archive their data, pre-process it and make it available to the person who will analyse the data fully and produce the required reports.

1. Preparation of database (restore from tape, if required).
2. Prepare test environment as required and verify its state.
3. Start monitoring processes (network, clients and servers, database).
4. Start the load simulation and observe system monitor(s).
5. When the load is stable, start the application test running tool and response time measurement.
6. Monitor the test closely for the duration of the test.


7. If the test running tools do not stop automatically, terminate the test when the test period ends.
8. Stop monitoring tools and save results.
9. Archive all captured results, and ensure all results data is backed up securely.
10. Produce interim reports, confer with other team members concerning any anomalies.
11. Prepare analyses and reports.
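A minimal, hypothetical orchestration of steps 3 to 9 is sketched below. The shell commands (start_monitors.sh, run_load.sh and so on) are placeholders for whatever monitoring and load generation tools are actually in use; only the sequencing reflects the steps listed above.

    # Sketch of steps 3-9: start monitors, run the load, measure, stop, archive.
    # All shell commands below are hypothetical placeholders for real tools.
    import subprocess
    import shutil
    import time

    monitors = subprocess.Popen(["./start_monitors.sh"])                   # step 3
    load = subprocess.Popen(["./run_load.sh", "--profile", "busy_hour"])   # step 4

    time.sleep(300)                                          # allow the load to stabilise (step 5)
    subprocess.run(["./run_response_measurements.sh"], check=True)

    load.wait()                                              # steps 6-7: wait for the test period to end
    monitors.terminate()                                     # step 8: stop monitoring tools

    shutil.make_archive("results_busy_hour", "zip", "results/")   # step 9: archive results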

When a test run is complete, it is common for some tuning activity to be performed. If a test is a repeat test, it is essential that any changes in environment are recorded, so that any differences in system behaviour, and hence performance results, can be matched with the changes in configuration. As a rule, it is wise to change only one thing at a time so that when differences in behaviour are detected, they can be traced back to the changes made.

Results Analysis and Reporting

The application test running tool will capture a series of response times for each transaction executed. The most typical report for a test run will summarise these measurements, and for each measurement taken the following will be reported:

• The count of measurements.
• Minimum response time.
• Maximum response time.
• Mean response time.
• 95th percentile response time.

The 95th percentile, it should be noted, is the time within which 95 percent of the measurements occur. Other percentiles are sometimes used, but this depends on the format of the response time requirements. The required response times are usually presented on the same report for comparison.

The other main requirement that must be verified by the test is system throughput. The load generation tool should record the count of each transaction type for the period of the test. Dividing these counts by the duration of the test gives the transaction rate or throughput actually achieved. These rates should match the load profile simulated - but might not if the system responds slowly. If the transaction load rate depends on delays between transactions, a slow response will increase the delay between transactions and slow the rate. The throughput will also be less than intended if the system simply cannot support the load applied.
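As a rough sketch of how these figures can be derived, the following assumes the test running tool has exported one response time (in seconds) per transaction executed; the 95th percentile and throughput calculations follow the definitions given above, and the sample data is invented.

    # Sketch: summary statistics and throughput from raw response time measurements.
    import statistics

    response_times = [1.2, 0.9, 1.4, 2.1, 0.8, 1.0, 3.5, 1.1]   # seconds, illustrative data
    test_duration_hours = 1.0

    count = len(response_times)
    print("Count:  ", count)
    print("Minimum:", min(response_times))
    print("Maximum:", max(response_times))
    print("Mean:   ", statistics.mean(response_times))

    # 95th percentile: the time within which 95 percent of the measurements occur.
    ordered = sorted(response_times)
    index = max(0, int(round(0.95 * count)) - 1)
    print("95th percentile:", ordered[index])

    # Throughput: transaction count divided by the duration of the test.
    print("Throughput:", count / test_duration_hours, "transactions/hour")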

It is common to execute a series of test runs at varying load. Using the results of a series of tests, a graph of response time for a transaction plotted against the load applied can be prepared. Such graphs provide an indication of the rate of degradation in performance as load is increased, and the maximum throughput that can be achieved, while providing acceptable response times.

Where a test driver is used to submit SQL statements to the database server across the network, the response times of each individual SQL statement can be recorded. A report of SQL statements in descending order of response time is a very good indicator of those SQL statements which would benefit from some optimisation and database tables or views which may not have been correctly defined (e.g. indices not set up).

Resource monitoring tools usually have statistical or graphical reporting facilities which plot resource usage over time. Enhanced reports of resource usage versus load applied are very useful, and can assist identification of bottlenecks in a system architecture.


The Risk-Based Testing Process

The first step in this approach is to identify the critical success factors for your client/server system.

Step 1 - Identify Critical Success Factors

This is accomplished by analyzing your business needs to ask, "What attributes must this system have to be considered a success?" For the purpose of client/server technology, QAI has identified eight critical success factors:

1. Solve the right business problem
2. Adequate capacities to satisfy user requests
3. Ease of use
4. Effective processes/standards
5. Effective integration of data, hardware and software
6. Use technology compatible with the organization's culture
7. Adequate resources to perform tasks effectively and efficiently
8. Plan for growth

Step 2 - Prioritize Critical Success Factors

It is difficult to consider all of the critical success factors when assessing risk or planning the test. Although all of the success factors play a role in the success of client/server systems, some factors will be more important to you than others.

Step 3 - Identify Risks

From the list of 10 risks, identify those risks that are present in your environment. Much time could be spent on this topic alone. For a complete treatment of the subject of risk assessment in a client/server environment, see the QAI course on Client/Server Risks.

Step 4 - Assess the Magnitude of Each Risk

Rank the magnitude of each risk from insignificant to critical.

Step 5 - Develop Test Strategy

The test strategy will depend on the kinds of risks and their magnitude. The type of applications and system architecture will also play a key role in the testing strategy. For example, some client/server systems use GUI applications, while others may use character-based applications. Other considerations include server types, database types, and user skill levels.

Step 6 - Develop a Test Plan

The test plan is a high level document that describes the background, strategy and objectives of the test. The test plan describes what is to be done, how it will be done, who will do it, and what the timeframes are. Any detail documents, such as descriptions of individual tests or test scripts, are attached at the end of the test plan.

Step 7 - Execute Tests

This step is the execution of the test as described in the plan. Client/server systems must take advantage of automated testing tools in order to keep up with the output from Rapid Application Development.

Automated scripts can be recorded based on manual scripts. When a change is made to the software, the scripts can be played back and compared to find differences.


Step 8 - Evaluate Tests

After the tests are complete, an evaluation must be made to determine if the system meets the expected criteria. If the observed results do not match the expected results, either a defect exists or the expected results are not correct.

After all of the tests have been executed, an overall evaluation of the test can be made. In many cases, a final test report is written. The final test report lists the findings and recommendations of the test team (if an independent team is used).


WEB Testing [Breadth of web testing]:

Typical web applications consist of a front end [GUI] and back end [DB] with application layers as a middle layer.

While testing, you can plan your system testing accordingly. Many GUI checklists are available against which you can validate your UI. At the application level, it depends on which method you prefer, i.e. manual or automated; tools are available for all non-functional testing.

User Interface

One of the reasons the web browser is being used as the front end to applications is the ease of use. Users who have been on the web before will probably know how to navigate a well-built web site. While you are concentrating on this portion of testing it is important to verify that the application is easy to use. Many will believe that this is the least important area to test, but if you want to be successful, the site better be easy to use.

Instructions

You want to make sure there are instructions. Even if you think the web site is simple, there will always be someone who needs some clarification. Additionally, you need to test the documentation to verify that the instructions are correct. If you follow each instruction does the expected result occur?

Site map or navigational bar

Does the site have a map? Sometimes power users know exactly where they want to go and don't want to wade through lengthy introductions. Or new users get lost easily. Either way a site map and/or an ever-present navigational bar can help guide the user. You need to verify that the site map is correct. Does each link on the map actually exist? Are there links on the site that are not represented on the map? Is the navigational bar present on every screen? Is it consistent? Does each link work on each page? Is it organized in an intuitive manner?

Content

To a developer, functionality comes before wording. Anyone can slap together some fancy mission statement later, but while they are developing, they just need some filler to verify alignment and layout. Unfortunately, text produced like this may sneak through the cracks. It is important to check with the public relations department on the exact wording of the content.

You also want to make sure the site looks professional. Overuse of bold text, big fonts and blinking (ugh) can turn away a customer quickly. It might be a good idea to consult a graphic designer to look over the site during User Acceptance Testing. You wouldn't slap together a brochure with bold text everywhere, so you want to handle the web site with the same level of professionalism.

Finally, you want to make sure that any time a web reference is given that it is hyperlinked. Plenty of sites ask you to email them at a specific address or to download a browser from an address. But if the user can't click on it, they are going to be annoyed.


Colors/backgrounds

Ever since the web became popular, everyone thinks they are graphic designers. Unfortunately, some developers are more interested in their new backgrounds, than ease of use. Sites will have yellow text on a purple picture of a fractal pattern. (If you've never seen this, try most sites at GeoCities or AOL.) This may seem "pretty neat", but it's not easy to use.

Usually, the best idea is to use little or no background. If you have a background, it might be a single color on the left side of the page, containing the navigational bar. But, patterns and pictures distract the user.

Images

Whether it's a screen grab or a little icon that points the way, a picture is worth a thousand words. Sometimes, the best way to tell the user something is to simply show them. However, bandwidth is precious to the client and the server, so you need to conserve memory usage. Do all the images add value to each page, or do they simply waste bandwidth? Can a different file type (.GIF, .JPG) be used for 30k less?

In general, you don't want large pictures on the front page, since most users who abandon a page due to a large load will do it on the front page. If you can get them to see the front page quickly, it will increase the chance they will stay.

Tables

You also want to verify that tables are setup properly. Does the user constantly have to scroll right to see the price of the item? Would it be more effective to put the price closer to the left and put miniscule details to the right? Are the columns wide enough or does every row have to wrap around? Are certain columns considerably longer than others?

Wrap-around

Finally, you will want to verify that wrap-around occurs properly. If the text refers to "a picture on the right", make sure the picture is on the right. Make sure that widowed and orphaned sentences and paragraphs don't lay out in an awkward manner because of pictures.

Functionality

The functionality of the web site is why your company hired a developer and not just an artist. This is the part that interfaces with the server and actually "does stuff".

Links

A link is the vehicle that gets the user from page to page. You will need to verify two things for each link: that the link brings you to the page it said it would and that the page you are linking to actually exists. It may sound a little silly but I have seen plenty of web sites with internal broken links.
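A very small link-checking sketch is shown below. It assumes the third-party requests and beautifulsoup4 packages are available, uses a hypothetical starting URL, and only checks the links on a single page, so it is a starting point rather than a full crawler.

    # Sketch: verify that every link on a page points to a page that actually exists.
    # Assumes the "requests" and "beautifulsoup4" packages are installed.
    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    start_url = "http://www.example.com/"          # hypothetical site under test
    page = requests.get(start_url, timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")

    for anchor in soup.find_all("a", href=True):
        target = urljoin(start_url, anchor["href"])
        try:
            response = requests.get(target, timeout=10)
            if response.status_code >= 400:
                print("Broken link:", target, "->", response.status_code)
        except requests.RequestException as exc:
            print("Unreachable link:", target, "->", exc)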

Forms

When a user submits information through a form it needs to work properly. The submit button needs to work. If the form is for an online registration, the user should be given login information (that works) after successful completion. If the form gathers shipping information, it should be handled properly and the customer should receive their package. In order to test this, you need to verify that the server stores the information properly and that systems down the line can interpret and use that information.


Data verification

If the system verifies user input according to business rules, then that needs to work properly. For example, a State field may be checked against a list of valid values. If this is the case, you need to verify that the list is complete and that the program actually calls the list properly (add a bogus value to the list and make sure the system accepts it).
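A minimal sketch of this kind of check is shown below. The validate_state function and the list of values are hypothetical stand-ins for the application's own business rule; the last two lines mirror the suggestion above of adding a bogus value to prove the program really consults the list.

    # Sketch: business-rule validation of a State field against a list of valid values.
    # The list and the function are illustrative stand-ins for the application's rules.
    VALID_STATES = {"AL", "AK", "AZ", "CA", "NY", "TX"}   # illustrative subset only

    def validate_state(value: str) -> bool:
        return value.strip().upper() in VALID_STATES

    # Positive and negative test cases
    assert validate_state("ca") is True
    assert validate_state("NY") is True
    assert validate_state("ZZ") is False   # bogus value must be rejected

    # If a bogus value is temporarily added to the list, the check should now accept it,
    # proving the program really reads the list rather than hard-coding the values.
    VALID_STATES.add("ZZ")
    assert validate_state("ZZ") is True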

Cookies

Most users only like the kind with sugar, but developers love web cookies. If the system uses them, you need to check them. If they store login information, make sure the cookies work. If the cookie is used for statistics, verify that totals are being counted properly. And you'll probably want to make sure those cookies are encrypted too, otherwise people can edit their cookies and skew your statistics.

Application specific functional requirements

Most importantly, you want to verify the application specific functional requirements. Try to perform all functions a user would: place an order, change an order, cancel an order, check the status of the order, change shipping information before an order is shipped, pay online, and so on.

This is why your users will show up on your doorstep, so you need to make sure you can do what you advertise.

Interface Testing

Many times, a web site is not an island. The site will call external servers for additional data, verification of data or fulfillment of orders.

Server interface

The first interface you should test is the interface between the browser and the server. You should attempt transactions, then view the server logs and verify that what you're seeing in the browser is actually happening on the server. It's also a good idea to run queries on the database to make sure the transaction data is being stored properly.

External interfaces

Some web systems have external interfaces. For example, a merchant might verify credit card transactions real-time in order to reduce fraud. You will need to send several test transactions using the web interface. Try credit cards that are valid, invalid, and stolen. If the merchant only takes Visa and MasterCard, try using a Discover card. (A script can check the first digit of the credit card number: 3 for American Express, 4 for Visa, 5 for MasterCard, or 6 for Discover, before the transaction is sent.) Basically, you want to make sure that the software can handle every possible message returned by the external server.
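The first-digit check mentioned in parentheses above might look something like the sketch below. The digit-to-network mapping is as stated in the text; the function names and the accepted-card policy are assumptions for illustration.

    # Sketch: route a card by its first digit before sending the transaction,
    # using the mapping given in the text (3=Amex, 4=Visa, 5=MasterCard, 6=Discover).
    CARD_NETWORKS = {"3": "American Express", "4": "Visa", "5": "MasterCard", "6": "Discover"}
    ACCEPTED = {"Visa", "MasterCard"}      # hypothetical merchant policy

    def card_network(card_number: str) -> str:
        return CARD_NETWORKS.get(card_number.strip()[:1], "Unknown")

    def is_accepted(card_number: str) -> bool:
        return card_network(card_number) in ACCEPTED

    assert is_accepted("4111111111111111") is True    # Visa-style number
    assert is_accepted("6011000000000000") is False   # Discover should be refused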

Error handling

One of the areas left untested most often is interface error handling. Usually we try to make sure our system can handle all of our errors, but we never plan for the other systems' errors or for the unexpected. Try leaving the site mid-transaction - what happens? Does the order complete anyway? Try losing the internet connection from the user to the server. Try losing the connection from the server to the credit card verification server. Is there proper error handling for all these situations? Are charges still made to credit cards? If the interruption is not user initiated, does the order get stored so customer service reps can call back if the user doesn't come back to the site?


Compatibility

You will also want to verify that the application can work on the machines your customers will be using. If the product is going to the web for the world to use, you will need to try different combinations of operating system, browser, video setting and modem speed.

Operating systems

Does the site work for both MAC and IBM-Compatibles? Some fonts are not available on both systems, so make sure that secondary fonts are selected. Make sure that the site doesn't use plug-ins only available for one OS, if your users will use both.

Browsers

Does your site work with Netscape? Internet Explorer? Lynx? Some HTML commands or scripts only work for certain browsers. Make sure there are alternate tags for images, in case someone is using a text browser. If you're using SSL security, you only need to check browsers 3.0 and higher, but verify that there is a message for those using older browsers.

Video settings

Does the layout still look good on 640x400 or 600x800? Are fonts too small to read? Are they too big? Does all the text and graphic alignment still work?

Modem/connection speeds

Does it take 10 minutes to load a page with a 28.8 modem, but you tested hooked up to a T1? Users will expect long download times when they are grabbing documents or demos, but not on the front page. Make sure that the images aren't too large. Make sure that marketing didn't put 50k of font size -6 keywords for search engines.

Printers

Users like to print. The concept behind the web should save paper and reduce printing, but most people would rather read on paper than on the screen. So, you need to verify that the pages print properly. Sometimes images and text align on the screen differently than on the printed page. You need to at least verify that order confirmation screens can be printed properly.

Combinations

Now you get to try combinations. Maybe 600x800 looks good on the MAC but not on the IBM. Maybe IBM with Netscape works, but not with Lynx.

If the web site will be used internally it might make testing a little easier. If the company has an official web browser choice, then you just need to verify that it works for that browser. If everyone has a T1 connection, then you might not need to check load times. (But keep in mind, some people may dial in from home.) With internal applications, the development team can make disclaimers about system requirements and only support those systems setups. But, ideally, the site should work on all machines so you don't limit growth and changes in the future.

Load/Stress

You will need to verify that the system can handle a large number of users at the same time, a large amount of data from each user, and a long period of continuous use. Accessibility is extremely important to users. If they get a "busy signal", they hang up and call the competition. Not only must the system be checked so your customers can gain access, but many times crackers will attempt to gain access to a system by overloading it. For the sake of security, your system needs to know what to do when it's overloaded and not simply blow up.


Many users at the same time

If the site just put up the results of a national lottery, it better be able to handle millions of users right after the winning numbers are posted. A load test tool would be able to simulate a large number of users accessing the site at the same time.
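In the absence of a commercial load test tool, a toy version of "many users at the same time" can be sketched with threads, as below. The URL and user count are placeholders, the requests package is assumed to be installed, and a real tool would add ramp-up, think times and proper result logging.

    # Sketch: simulate many simultaneous users requesting the same page.
    # URL and user count are placeholders; the "requests" package is assumed installed.
    import threading
    import requests

    URL = "http://www.example.com/results"    # hypothetical results page under test
    USERS = 100

    def one_user(user_id: int) -> None:
        try:
            response = requests.get(URL, timeout=30)
            print("user", user_id, "->", response.status_code, len(response.content), "bytes")
        except requests.RequestException as exc:
            print("user", user_id, "failed:", exc)

    threads = [threading.Thread(target=one_user, args=(i,)) for i in range(USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()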

Large amount of data from each user

Most customers may only order 1-5 books from your new online bookstore, but what if a university bookstore decides to order 5000 different books? Or what if grandma wants to send a gift to each of her 50 grandchildren for Christmas (separate mailing addresses for each, of course.) Can your system handle large amounts of data from a single user?

Long period of continuous use

If the site is intended to take orders for flower deliveries, then it better be able to handle the week before Mother's Day. If the site offers web-based email, it better be able to run for months or even years, without downtimes.

You will probably want to use an automated test tool to implement these types of tests, since they are difficult to do manually. Imagine coordinating 100 people to hit the site at the same time. Now try 100,000 people. Generally, the tool will pay for itself the second or third time you use it. Once the tool is set up, running another test is just a click away.

Security

Even if you aren't accepting credit card payments, security is very important. The web site will be the only exposure some customers have to your company. And, if that exposure is a hacked page, they won't feel safe doing business with you.

Directory setup

The most elementary step of web security is proper setup of directories. Each directory should have an index.html or main.html page so a directory listing doesn't appear.

One company I was consulting for didn't observe this principle. I right clicked on an image and found the path "...com/objects/images". I went to that directory manually and found a complete listing of the images on that site. That wasn't too important. Next, I went up one level to "...com/objects" and I hit the jackpot. There were plenty of goodies, but what caught my eye were the historical pages. They had changed their prices every month and kept the old pages. I browsed around and could figure out their profit margin and how low they were willing to go on a contract. If a potential customer did a little browsing first, they would have had a definite advantage at the bargaining table.
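The same check can be made programmatically in a rough way: request each known directory and look for a listing page. The directory paths and the "Index of" marker below are assumptions based on a typical default web server listing, not details from the text.

    # Sketch: request directories and flag any that return a listing instead of a page.
    # Directory paths and the "Index of" marker are illustrative assumptions.
    import requests

    base = "http://www.example.com"
    directories = ["/objects/", "/objects/images/", "/downloads/"]

    for path in directories:
        response = requests.get(base + path, timeout=10)
        if "Index of" in response.text:
            print("Directory listing exposed at", base + path)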

SSL (Secure Sockets Layer)

Many sites use SSL for secure transactions. You know you entered an SSL site because there will be a browser warning and the HTTP in the location field on the browser will change to HTTPS. If your development group uses SSL you need to make sure there is an alternate page for browser with versions less than 3.0, since SSL is not compatible with those browsers. You also need to make sure that there are warnings when you enter and leave the secured site. Is there a timeout limit? What happens if the user tries a transaction after the timeout?

Logins

In order to validate users, several sites require customers to login. This makes it easier for the customer since they don't have to re-enter personal information every time. You need to verify that the system does not allow invalid usernames/passwords and that it does allow valid logins. Is there a maximum number of failed logins allowed before the server locks out the current user? Is the lockout based on IP? What if the maximum failed login attempts is three, and you try three, but then enter a valid login? What are the rules for password selection?

Log files

Behind the scenes, you will need to verify that server logs are working properly. Does the log track every transaction? Does it track unsuccessful login attempts? Does it only track stolen credit card usage? What does it store for each transaction? IP address? User name?

Scripting languages

Scripting languages are a constant source of security holes. The details are different for each language. Some exploits allow access to the root directory. Others allow access to the mail server. Find out what scripting languages are being used and research the loopholes. It might also be a good idea to subscribe to a security newsgroup that discusses the language you will be testing.

The Cache

If a defect can't be reproduced, it may be because the page is cached and the defect only occurs under initial conditions where the page comes from the server. To check for this, clear the browser's cache often.

Frames

Frames can be good and bad. They look good and help organize a page, but there can be problems bookmarking pages in frames, and if poorly designed, frames can lead to confusing navigation. Some issues to consider include: Do frames resize automatically and appropriately? Is the user able to manipulate frame size? Does a scrollbar appear if required? What is actually recognized by the Bookmark or Favorites feature? Are all pages which are included in the frameset reliable? Can a search engine find content within the frames? Do the frame borders look good? Are there any "Refresh" issues?

Animation

Animation is popular and can bring a page to life. Check for: smooth animation, flashing during animation, and the amount of animation (too much can be distracting to your user).

Graphics

Graphics should be small enough that they do not cause prolonged download times. If the design is such that images fade-in using interlaced GIFs or progressive JPEGs, test to make sure the image is completely painted.

Test Cycles are Frequent with Web sites and Web Applications

Testers must keep abreast of emerging Internet technologies, and must know the issues that accompany each technology and recognize which browsers support which technologies. The constant development of new technologies also means that Web sites must continually be changed and improved. This in turn means that there will be many development and test cycles. This requires that you are aware of the changes that are made to your site and have good configuration management and change management control. Start testing as early in the development process as possible. Ideally, this means you would be able to review the specifications before any coding has begun. Because of the shift toward e-commerce and Web-based software, testing Web sites is more complex and more important than ever. By following the suggestions outlined in this article, you'll be well on your way to keeping your site visitors coming back.


WEB SITE COOKIE TESTING:

The protocol used for exchanging files on the Web is stateless, but maintaining state is essential for most websites. To maintain state, one option that Web developers have is to use cookies. This paper provides a technical background and real-world examples to help you understand how cookies work and how to test systems that employ cookies. The amazon.com website is used as a real-world example to demonstrate cookie testing techniques.

Stateless, Stateful Systems

According to whatis.com, a stateless system has “no record of previous interactions and each interaction request has to be handled based entirely on information that comes with it.” On the flip side, a stateful system does keep a record of previous interactions.

To elaborate on stateless systems, we will consider the Hypertext Transfer Protocol (HTTP), which is the protocol used to exchange files on the World Wide Web.

The Stateless HTTP

If you enter http://www.testwareinc.com into your Web browser’s address bar and press Enter, the conversation between your browser and Testware’s Web server over HTTP goes like this:

Your browser: “Hey Testware Web server! Can I please have the page http://www.testwareinc.com/index.html?”

Testware Web server: “Yes. The document you requested does exist.”

Testware Web server: “Here is the text of the document: <document text follows>.”

Once your browser receives the last byte of the index.html page using HTTP, the Testware Web server essentially “forgets” about what you did. If you now go elsewhere on the Testware website, the Testware Web server responds to your new request as above, without memory of your earlier request. This isn’t a bad thing for the Testware website; no harm, no foul. It does not need to know anything about your earlier request to respond to your new request. But are there cases in which state does matter for a Web-based system?

To State or Not To State on the Web

As we’ll see, state certainly does matter on the Web! Take everyone’s favorite online purveyor of books and music, amazon.com. If there weren’t ways to overcome the stateless nature of HTTP and maintain state on the Web, the following would not be possible:

• The nice “Hello, <your name>” message that greets returning shoppers on the Amazon home page. Without state, how could the site have any knowledge of your name?


• The virtual shopping cart. In the absence of state, how could Amazon keep track of the individual items I add to my cart and the quantity of each item?

So, how does Amazon work around the stateless HTTP protocol to accomplish the “magic” above? Part of the answer is cookies.

Maintaining State with Cookies

Whatis.com notes that a cookie is “information a Web site puts on your hard disk so that it can remember something about you at a later time.” Why? “Each request for a Web page is independent of all other requests. For this reason, the Web page server has no memory of what pages it has sent to a user previously or anything about your previous visits.”

How and where cookies are stored depends on the browser and operating system you use. Internet Explorer (IE) stores each cookie in a separate file, usually underneath the Windows operating system folder. Netscape (NS) stores all cookies in a single file named cookies.txt, usually in a folder underneath the browser’s installation folder.

Per-Session Cookies and Cookie Expiration

When a Web server sets a cookie on your system, it can optionally give that cookie an expiration date. As time marches on, any cookies with expiration dates in the “past” are deleted.

If the Web server does not give a cookie an expiration date, that cookie is a per-session cookie. Per-session cookies are deleted when you close your Web browser; they only exist for the single Web surfing session beginning when you start the browser and ending when you close the browser.

Cookie Detective Work

How can you tell if the Web system you are testing uses cookies? Simply read the website design documents, functional specs, etc. – if such documents are available. A more direct approach, especially useful in the likely absence of such documentation, is:

• Find the folder on your PC where cookies are stored.


• Delete all of the existing cookies. In Internet Explorer, the cache files are stored in the same folder as the cookies. Clearing the browser cache in IE can make finding the cookies easier, but isn’t strictly necessary.

• Set your browser’s cookie options to “prompt me”. In Internet Explorer, choose Tools | Internet Options, navigate to the Security tab, click Custom Level and select the “Prompt” radio button under “Allow cookies that are stored on your computer”. Also do the same under “Allow per-session cookies (not stored).” In Netscape, choose Edit | Preferences, select Advanced and check the “Warn me before accepting a cookie” box.

• Navigate through all of the major features and functions on the site to see where cookies are employed. How do you know where cookies are used? Whenever the site attempts to record state information in a cookie on your PC, you will be prompted with a message. Internet Explorer’s prompt looks like this:


Netscape uses the following prompt:

• Every time this dialog appears, record the cookie details and what action(s) cause the cookie to be created or modified. Then, click Yes to accept the cookie. Personally, I find it easier to accept the cookie, open the cookie file and copy/paste the cookie details into a “cookie log” with my observations for later analysis. Save this data, including the cookie names and contents, creating a log of cookie activity correlated to your activities on the website. A word of warning: some sites are highly active with cookies, setting or modifying them on every page you visit. Creating the cookie log on these types of sites will be time consuming and drive you to a certain level of insanity. Getting as much info as possible in advance about cookie activity from the developers is usually your best bet.


Cookie Usage by Amazon.com

Let’s make the cookie concepts we’ve discussed so far more concrete by examining how amazon.com uses cookies. In doing so, we will also encounter a common problem in “cookie testing” – figuring out what the hieroglyphic-like information in the cookie means!

To start, I deleted all Netscape cookies from my PC and set the cookie option to prompt me. Next, I navigated to www.amazon.com.

The first cookie activity by the Amazon Web server was to create a cookie (in the cookies.txt file) with the following data.

.amazon.com TRUE / FALSE 994320128 session-id 102-7224116-8052958

The prompt that Netscape presented me with indicated the cookie will expire on Thursday July 5, 2001, one week from today’s date as I write this. (We’ll explore the details in the cookie in the next two sections.)

The second cookie set by Amazon contained the following data and also expires on 7/5/2001.

.amazon.com TRUE / FALSE 994320181 session-id-time 994320000

Amazon’s third cookie contained the following and expires on 1/1/2036. My laptop will be reduced to either paperweight or landfill status by then, so this is pretty much a “permanent cookie” relative to the useful life of my laptop.

.amazon.com TRUE / FALSE 2082787330 ubid-main 077-4356846-2652328

The fourth cookie is a per-session cookie, since the Netscape prompt did not include an expiration date. Since per-session cookies aren’t written to the hard drive, examining the cookie content can be done only through the actual Netscape prompt.

The fifth cookie Amazon set expires on 1/1/2036 and contained the following data.

.amazon.com TRUE / FALSE 2082787787 x-main hQFiIxHUFj8mCscT@Yb5Z7xsVsOFQjBf

After accepting this fifth cookie, the amazon.com home page (finally!) displayed. The URL of the home page was http://www.amazon.com/exec/obidos/subst/home/home.html/102-7224116-8052958.

Have we seen that number sequence at the end of the URL before? Yes, it’s the session ID stored in the first cookie.

A sixth cookie containing the following data and expiring on 6/29/2001 was then set.

www.amazon.com FALSE / FALSE 993797034 seen pop 1

Upon accepting this cookie, a secondary browser window popped up with a free shipping promotion notice. A logical guess at this cookie’s purpose, then, would be that it tracks whether or not you’ve seen the promotion popup ad.


After all of these cookies were set, my Netscape cookies.txt file looked like this:

Why are there only five cookies in the file? The per-session cookie is kept in memory only; it is not written to the cookies.txt file.

What’s Inside a Cookie?

Before we attempt to analyze all of the cookies set by amazon.com, let’s take a quick look at cookie structure and the meaning of cookie data.

The first cookie set by Amazon was:

.amazon.com TRUE / FALSE 994320128 session-id 102-7224116-8052958

Using the information at www.cookiecentral.com, I’ll break the cookie down into its individual fields from left to right and describe what each field is used for.

• .amazon.com is the domain this cookie is valid for. Only cookies set by machines in the amazon.com domain can read this cookie. (However, bugs in Web browser cookie implementation have allowed unauthorized sites to access cookies in the past.)

• TRUE is a flag indicating whether or not all machines in the domain can access the cookie.

• / is the path the cookie is valid for.

• FALSE is a secure flag indicating whether or not a secure (encrypted) connection is needed to access the cookie.

• 994320128 is the UNIX expiration time of the cookie. UNIX time is the number of seconds since January 1, 1970 00:00:00 GMT.

• session-id is the name of the variable stored by this cookie.

• 102-7224116-8052958 is the value of this variable.
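This breakdown can be reproduced mechanically. A Netscape cookies.txt entry holds the fields in the order just described, and the UNIX expiration time is simply seconds since 1 January 1970. The sketch below parses the first Amazon cookie quoted above and converts its expiry to a calendar date; splitting on whitespace is a simplification of the tab-separated file format.

    # Sketch: parse one cookies.txt-style entry and convert its UNIX expiry time.
    # The line is the first Amazon cookie quoted in the text.
    import datetime

    line = ".amazon.com TRUE / FALSE 994320128 session-id 102-7224116-8052958"
    domain, all_machines, path, secure, expires, name, value = line.split()

    expiry = datetime.datetime.fromtimestamp(int(expires), tz=datetime.timezone.utc)
    print("domain:", domain)
    print("name =", name, "value =", value)
    print("expires:", expiry.strftime("%Y-%m-%d %H:%M:%S UTC"))   # early July 2001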

Amazon.com Cookie Analysis

Our cookie experiment for amazon.com showed us that simply loading the Amazon home page creates six cookies – one per-session (non-persistent) cookie and five persistent cookies. Since the site design documents and developers are unavailable to us, let’s put the cookie data into a table and try to decipher what the cookies are used for and the meaning of the cookie data. We will consider only the first five cookies, since we determined the purpose of the sixth cookie above.


cookie #  domain       accessible by  path  secure connection  expiration            name             value
                       all machines         needed
1         .amazon.com  TRUE           /     FALSE              994320128             session-id       102-7224116-8052958
2         .amazon.com  TRUE           /     FALSE              994320181             session-id-time  994320000
3         .amazon.com  TRUE           /     FALSE              2082787330            ubid-main        077-4356846-2652328
4         .amazon.com  TRUE           /     FALSE              (per-session cookie)  obidos_path      (see above)
5         .amazon.com  TRUE           /     FALSE              2082787787            x-main           hQFiIxHUFj8mCscT@Yb5Z7xsVsOFQjBf

The first cookie is a session ID assigned to my shopping session by the Amazon server. The primary giveaway here is the variable name “session-id”. Another clue is that the data in its value field, 102-7224116-8052958, can be found at the end of the home page URL visible after the 5th cookie was set, www.amazon.com/…/home.html/102-7224116-8052958. Cookie 1 expires on 7/5/2001, based on the warning dialog I saw in Netscape before the cookie was set. So, the UNIX expiration time 994320128 in this cookie must correspond to 7/5/2001.

The second cookie’s purpose isn’t obvious. Based on its name and value, session-id-time and 994320000, respectively, I would guess it is the maximum possible “end” time in UNIX time of my amazon.com session. I know from the Netscape warning above that this cookie expires on 7/5/2001, so I can infer that the expiration value of 994320181 in this cookie corresponds to 7/5/2001. Why? These two UNIX times are only 994320181 – 994320000 = 181 = ~ 3 minutes apart.

The purpose of cookies 3 and 5 is even harder to decipher. The names of cookies 3 and 5, ubid-main and x-main, don’t lend us any immediate understanding. Both of these cookies expire in 2036, so whatever Amazon is tracking here, it must be of long-term use.

The fourth cookie, the only per-session / non-persistent cookie, contains a long value with the substring “continue-shopping-url”. I would guess this cookie value tells the Amazon Web server where to send me if I click the “Continue Shopping” button on the shopping cart page. As before, I’m having a hard time figuring out with certainty what this cookie is used for without doing further investigation.

I’d have to talk to the Amazon developers or get access to some design documents or specifications to get any further here.


Cookie Testing

Now that we’re in the know about what cookies are, how they’re used to provide state in Web systems and what cookie contents look like, let’s address how to test sites that use cookies.

Disabling Cookies

This is probably the easiest area of cookie testing. What happens to the web site if all cookies are disabled? Start by closing all instances of your browser and deleting all cookies from your PC set by the site under test. The cookie file is kept open by the browser while it’s running, so you must close the browser to delete the cookies. Closing the browser also removes any per-session cookies in memory.

Disable all cookies and attempt to use the site’s major features and functions. Most of the time, you will find that these don’t work, since cookies are disabled. This isn’t a bug, but rather a fact of life: disabling cookies on a site that requires cookies (of course!) disables the site’s functionality.

With cookies disabled, your testing job is somewhat reduced. Is it obvious to the website user that he must have cookies enabled to use the site? Is the Web server recognizing that its attempts to set cookies are failing? If so, does it send a page to the user stating, in plain language, that cookies must be enabled for the site to work? Or, can the user frustratingly attempt the same operation many times in a row without a clue as to why the site isn’t working?

Amazon.com passes this test with flying colors. I was able to use all major aspects of the site – searching, shopping cart, checkout functions – even though cookies were completely disabled. I’d bet that state maintenance was being taken care of server-side, based on the session ID at the end of the home page URL. Let’s test this hypothesis. The home page URL was www.amazon.com/…/home.html/104-0274809-0482344. If I change the rightmost digit from 4 to 5 and repost the URL, Amazon discards my edited URL and “recovers” from the corruption by creating a URL with a new session ID, www.amazon.com/…/home.html/107-0357560-1728507. So, it appears that the hypothesis is correct.

Let’s probe a little further. I chose the Yamaha CD-ROM kit on the Amazon home page and added it to my shopping cart. The shopping cart page URL was www.amazon.com/…/one-click-thank-you-confirm/107-0357560-1728507. Changing the rightmost digit from 7 to 8 and posting this edited URL lost my shopping cart and brought up the following error page, lending further support to the hypothesis of server-side state maintenance with a session ID in the URL.


This server-side state maintenance allows someone to shop at amazon.com even if they have totally disabled cookies – an intelligent design. If cookies are enabled, though, we saw previously that Amazon sets a session ID cookie to “remember” your session ID. Why? If you leave the site with a non-empty shopping cart and then return, the session ID cookie is used to resume your previous shopping session and shopping cart state.

Selectively Rejecting Cookies

What happens to the site if some cookies are accepted and others are rejected? Start by deleting all cookies from your PC set by the site under test and set your browser’s cookie option to prompt you whenever a web site attempts to set a cookie. Exercise the site’s major functions. You will be prompted for each and every cookie the site attempts to set. Accept some and reject others. (Analyze site cookie usage in advance and draw up a test plan detailing what cookies to reject/accept for each function.) How does the site hold up under this selective cookie rejection? As above, does the Web server detect that certain cookies are being rejected and respond with an appropriate message? Or, does the site malfunction, crash, corrupt data, or misbehave in other ways?

Let’s strategize a selective cookie rejection test for the amazon.com home page. Each test case will require either accepting or rejecting each of the six cookies, so there are 2^6 = 64 possible test cases. A subset of the test cases is enumerated in the following table.

test case #   cookie 1       cookie 2       cookie 3       cookie 4        cookie 5       cookie 6
              (persistent)   (persistent)   (persistent)   (per session)   (persistent)   (persistent)
1             reject         reject         reject         reject          reject         reject
2             reject         reject         reject         reject          reject         accept
3             reject         reject         reject         reject          accept         reject
4             reject         reject         reject         reject          accept         accept
5             reject         reject         reject         accept          reject         accept
...
64            accept         accept         accept         accept          accept         accept

If I were to run the fifth test case, for example, I would reject the first three cookies when amazon.com tries to set them, but allow Amazon to set the fourth, fifth and sixth cookies.

The first test case is equivalent to the disabling cookies test performed previously, but I’ll leave it in the table for completeness. If you think in binary and consider reject to be 0 and accept to be 1, the table has binary representations of the decimal numbers 0 through 63 inclusive.
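The full matrix is small enough to generate mechanically. Here is a minimal sketch (my own illustration of the reject = 0 / accept = 1 scheme described above):

from itertools import product

# Enumerate all 2^6 = 64 combinations; the last cookie varies fastest,
# matching the binary counting described above.
rows = list(product(("reject", "accept"), repeat=6))
assert len(rows) == 64

for case_number, row in enumerate(rows, start=1):
    print(case_number, *row)
# rows[0]  -> all reject (equivalent to the disabling-cookies test)
# rows[63] -> all accept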

Based on Amazon’s performance in the disabling cookies test, I would guess that the site would pass most or nearly all of the selective cookie rejection test cases. I executed test cases 2 and 5, closing the browser and deleting the cookies before starting each test case. Both passed; I was able to use the site’s major functions, as above, without problem. Looks like the site designers ensured that “problems” with cookies would have little or no effect on a customer’s ability to shop at amazon.com.

Note that the test cases above only deal with the cookies being rejected or accepted when amazon.com first tries to create them. We also should test rejecting and accepting cookie modifications. Allow a cookie to initially be set. If/when the Web server attempts to subsequently modify that cookie, what happens if you disallow the change, retaining the "old" value?

Corrupting Cookies

Now’s our chance to really abuse the site under test! You will first need to do the “Cookie Detective Work” mentioned above to determine how and where your site uses cookies and the meaning of the cookie data. Now, exercise the site’s major features. Along the way, as cookies are created and modified, try things like:

• Altering the data in the persistent cookies. (Since the per-session cookies are stored only in memory, they aren’t readily accessible for editing. There might be tools out there for corrupting per-session cookies, but I’m not aware of any.) Example: in the first cookie written by amazon.com, change the variable name session-id to something different, perhaps ses-id or sexqion-id. Remember, you will have to close the browser to edit the cookies. Before this edit is made, if I visit the Amazon site, close the browser, restart the browser and go back to amazon.com, my “previous” session is maintained based on the session ID in the cookie. However, if I corrupt the session ID variable name, Amazon detects the corruption and recovers by discarding all six of the cookies and recreating them with new values. After editing the cookie, restart the browser and reload/continue using the site. Did the corrupted cookie cause the site to malfunction? Is any data lost or corrupted in the database?

Second example: change the session-id value in the data field by adding 1 to the rightmost digit; 102-7224116-8052958 becomes 102-7224116-8052959. Are you now looking at someone else's shopping session? Is anything lost or corrupted in the database? (A minimal sketch of this check follows.)
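If you want to drive this particular corruption from a script rather than by hand-editing cookies.txt, something along the following lines works. It assumes the third-party requests library; the request details are illustrative only and are not Amazon's documented interface.

import requests

TAMPERED_ID = "102-7224116-8052959"   # the observed value with the last digit bumped

session = requests.Session()
session.cookies.set("session-id", TAMPERED_ID, domain=".amazon.com")

response = session.get("https://www.amazon.com/")
print(response.status_code)
print(session.cookies.get("session-id"))   # did the server hand back a brand-new session ID?
# Things to verify by hand: did the site recover with a fresh session, or are you
# now looking at data belonging to someone else's session (a serious bug)?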

• Selectively deleting cookies. Allow the cookie to be written (or modified), perform several more actions on the site, and then delete that cookie. Continue using the site. What happens? Is it easy to recover? Any data loss or corruption?

Cookie Encryption

The last cookie test I’ll mention is a simple one. While investigating cookie usage on the site you’re testing, pay particular attention to the meaning of the cookie data. Sensitive information like usernames and passwords should NOT be stored in plain text for all the world to read; this data should be encrypted before it is sent to your computer. I’ve tested many sites where this seemingly obvious rule has been violated. A case can certainly be made that certain types of sensitive data – credit card numbers, for example – should never be stored in cookies, even encrypted.
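A crude first pass at this check can be automated by scanning the persistent cookie file for obviously sensitive strings. The sketch below is only an illustration; the file path and keyword list are assumptions to be tuned for the site under test, and any hit still needs human judgment.

import re
from pathlib import Path

COOKIE_FILE = Path("cookies.txt")   # hypothetical Netscape-style cookie file
SUSPECT = re.compile(r"(password|passwd|username|ssn|\b\d{13,16}\b)", re.IGNORECASE)

for line in COOKIE_FILE.read_text().splitlines():
    if not line.strip() or line.startswith("#"):
        continue                     # skip blank lines and comments
    if SUSPECT.search(line):
        print("Possible plain-text sensitive data:", line)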

Based on the amazon.com cookie analysis we performed above, I’d say Amazon easily passes the cookie encryption test. No sensitive user or credit card information is stored in plain text. I did find a way to ‘hack’ into an account using the cookie data. On User A’s machine, I navigated to the Amazon home page and then added a book on John Adams to the shopping cart. I copied all five of the persistent cookies out of the cookies.txt file and pasted them into the cookies.txt file on User B’s machine. Navigating to the amazon.com home page on User B’s machine gave me access to User A’s shopping cart with the John Adams book.

A little further experimentation showed that editing just User B’s session ID cookie to match User A’s does not allow User B to access A’s shopping cart. This leads me to believe that one or more of the other cookies are used as part of a unique key to lessen the probability that this type of hacking would be successful.

State information can be maintained in Web systems by the use of cookies. (Other methods for maintaining state include hidden form fields and embedding state data in HTML links; I recommend that web testers explore these methods as well.) Our job as testers is to find out, by talking to developers, reading system documentation or experimenting with the web site, which of these technologies are being used and to design tests accordingly.


NETWORK TESTING

While this section describes generalized network security testing that is applicable to all networked systems, it is aimed more towards the following types of systems:

• Firewalls, both internal and external
• Routers and switches
• Related network-perimeter security systems such as intrusion detection systems
• Web servers, email servers, and other application servers
• Other servers, such as Domain Name Service (DNS) servers, directory servers, or file servers (CIFS/SMB, NFS, FTP, etc.)

Example of Mission Critical Systems for Initial Testing

These systems generally should be tested first, before proceeding to test general staff and related systems, i.e., desktop, standalone, and mobile client systems.

The tests described in this document are applicable to various stages of the system development lifecycle, and are most useful as part of a routine network security test program to be conducted while systems are running in their operational environments.


This section uses the terms system, network security testing, operational testing, and vulnerability extensively. For the purposes of this document, their definitions will be as follows:

System – A system is any of the following:

• Computer system (e.g., mainframe, minicomputer)
• Network system (e.g., local area network [LAN])
• Network domain
• Host (e.g., a computer system)
• Network nodes, routers, switches and firewalls
• Network and/or computer application on each computer system

Network Security Testing

Activities that provide information about the integrity of an organization's networks and associated systems through testing and verification of network-related security controls on a regular basis. “Security Testing” or “Testing” is used throughout this document to refer to Network Security Testing.

Operational Security Testing

Network security testing conducted during the operational stage of a system’s life, that is, while the system is operating in its operational environment.

Vulnerability

A bug, misconfiguration, or special set of circumstances that could be exploited to compromise the security of a system. For the purposes of this document, a vulnerability could be exploited directly by an attacker, or indirectly through automated attacks such as Distributed Denial of Service (DDoS) attacks or by computer viruses.

SECURITY TESTING AND THE SYSTEM DEVELOPMENT LIFE CYCLE

System Development Life Cycle

Evaluation of system security can and should be conducted at different stages of system development. Security evaluation activities include, but are not limited to, risk assessment, certification and accreditation (C&A), system audits, and security testing at appropriate periods during a system's life cycle. These activities are geared toward ensuring that the system is being developed and operated in accordance with an organization's security policy. This section discusses how network security testing, as a security evaluation activity, fits into the system development life cycle. A typical system life cycle would include the following activities:

1. Initiation – the system is described in terms of its purpose, mission, and configuration.

2. Development and Acquisition – the system is possibly contracted and constructed according to documented procedures and requirements.

3. Implementation and Installation – the system is installed and integrated with other applications, usually on a network.


4. Operational and Maintenance – the system is operated and maintained according to its mission requirements.

5. Disposal – the system's lifecycle is complete and it is deactivated and removed from the network and active use.

Typically, network security testing is conducted after the system has been developed, installed, and integrated, during the Implementation and Operational stages.

System Development Life Cycle

Implementation Stage

During the Implementation Stage, Security Testing and Evaluation should be conducted on particular parts of the system and on the entire system as a whole. Security Test and Evaluation (ST&E) is an examination or analysis of the protective measures that are placed on an information system once it is fully integrated and operational. The objectives of the ST&E are to:

• Uncover design, implementation and operational flaws that could allow the violation of security policy
• Determine the adequacy of security mechanisms, assurances and other properties to enforce the security policy
• Assess the degree of consistency between the system documentation and its implementation.

The scope of an ST&E plan typically addresses computer security, communications security, emanations security, physical security, personnel security, administrative security, and operations security.


Operational Stage

Once a system is operational, it is important to ascertain its operational status, that is, “…whether a system is operated according to its current security requirements. This includes both the actions of people who operate or use the system and the functioning of technical controls.” The tests described can be conducted to assess the operational status of the system. The types of tests selected and the frequency in which they are conducted depend on the importance of the system and the resources available for testing. These tests, however, should be repeated periodically and whenever a major change is made to the system. For systems that are exposed to constant threat (e.g., web servers) or that protect critical information (e.g., firewalls), testing should be conducted more frequently.

The Operational Stage is subdivided into two stages, adding a Maintenance Stage in which the system may be temporarily off-line due to a system upgrade, configuration change, or an attack.

Testing Activities at the Operations and Maintenance Stages

During the Operational Stage, periodic operational testing is conducted. During the Maintenance Stage, ST&E testing may need to be conducted just as it was during the Implementation Stage. This level of testing may also be required before the system can be returned to its operational state, depending upon the criticality of the system and its applications. For example, an important server or firewall may require full testing, whereas a desktop system may not.

Documenting Security Testing Results

Security testing provides insight into other system development life cycle activities, such as risk analysis and contingency planning. Security testing results should be documented and made available for staff involved in other IT and security related areas. Specifically, security testing results can be used in the following ways:

• As a reference point for corrective action,
• In defining mitigation activities to address identified vulnerabilities,
• As a benchmark for tracking an organization's progress in meeting security requirements,
• To assess the implementation status of system security requirements,
• To conduct cost/benefit analysis for improvements to system security, and
• To enhance other life-cycle activities, such as risk assessments, Certification and Accreditation (C&A), and performance improvement efforts.


SECURITY TESTING TECHNIQUES

There are several different types of security testing. The following section describes each testing technique and provides additional information on the strengths and weaknesses of each. Some testing techniques are predominantly manual, requiring an individual to initiate and conduct the test. Other tests are highly automated and require less human involvement. Regardless of the type of testing, staff who set up and conduct security testing should have significant security and networking knowledge, including significant expertise in the following areas: network security, firewalls, intrusion detection systems, operating systems, programming and networking protocols (such as TCP/IP).

The following types of testing are described in this section:

► Network Scanning

► Vulnerability Scanning

► Password Cracking

► Log Review

► Integrity Checkers

► Virus Detection

► War Dialing

► War Driving (802.11 or wireless LAN testing)

► Penetration Testing

Often, several of these testing techniques are used together to gain a more comprehensive assessment of the overall network security posture. For example, penetration testing usually includes network scanning and vulnerability scanning to identify vulnerable hosts and services that may be targeted for later penetration. Some vulnerability scanners incorporate password cracking. None of these tests by themselves will provide a complete picture of the network or its security posture.
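To make the network scanning technique concrete before the detailed comparisons below, here is a minimal TCP connect scan sketch in Python (my own illustration, not a replacement for a real scanner such as nmap). The host and port list are assumptions; only scan systems you are authorized to test.

import socket

HOST = "192.0.2.10"   # hypothetical in-scope host (a TEST-NET address)
PORTS = [21, 22, 23, 25, 53, 80, 110, 443, 12345, 31337]   # includes two common Trojan ports

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex() returns 0 when the port accepts the connection
        status = "open" if s.connect_ex((HOST, port)) == 0 else "closed/filtered"
        print(HOST, port, status)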

After running any tests, certain procedures should be followed, including documenting the test results, informing system owners of the results, and ensuring that vulnerabilities are patched or mitigated.

Roles and Responsibilities for Testing

Only designated individuals, including network administrators or individuals contracted to perform the network scanning as part of a larger series of tests, should conduct the tests described in this section. The approval for the tests may need to come from as high as the CIO, depending on the extent of the testing. It would be customary for the testing organization to alert other security officers, management, and users that network mapping is taking place. Since a number of these tests mimic some of the signs of attack, the appropriate managers must be notified to avoid confusion and unnecessary expense. In some cases, it may be wise to alert local law enforcement officials if, for example, the security policy includes notifying law enforcement.


General Information Security Principles

When addressing security issues, some general information security principles should be kept in mind, as follows:

Simplicity—Security mechanisms (and information systems in general) should be as simple as possible. Complexity is at the root of many security issues.

Fail-Safe—If a failure occurs, the system should fail in a secure manner. That is, if a failure occurs, security should still be enforced. It is better to lose functionality than lose security.

Complete Mediation—Rather than providing direct access to information, mediators that enforce access policy should be employed. Common examples include file system permissions, web proxies and mail gateways.

Open Design—System security should not depend on the secrecy of the implementation or its components. "Security through obscurity" does not work.

Separation of Privilege—Functions, to the degree possible, should be separate and provide as much granularity as possible. The concept can apply to both systems and operators/users. In the case of system operators and users, roles should be as separate as possible. For example, if resources allow, the role of system administrator should be separate from that of the security administrator.

Psychological Acceptability—Users should understand the necessity of security. This can be provided through training and education. In addition, the security mechanisms in place should present users with sensible options that will give them the usability they require on a daily basis. If users find the security mechanisms too cumbersome, they will find ways to work around or compromise them. An example of this is using random passwords that are very strong but difficult to remember; users may write them down or look for methods to circumvent the policy.

Layered Defense—Organizations should understand that any single security mechanism is generally insufficient. Security mechanisms (defenses) need to be layered so that compromise of a single security mechanism is insufficient to compromise a host or network. There is no “magic bullet” for information system security.

Compromise Recording—When systems and networks are compromised, records or logs of that compromise should be created. This information can assist in securing the network and host after the compromise and assist in identifying the methods and exploits used by the attacker. This information can be used to better secure the host or network in the future. In addition, this can assist organizations in identifying and prosecuting attackers.


Summary Comparison of Network Testing Techniques

NETWORK SCANNING

Strengths:
• Fast (as compared to vulnerability scanning or penetration testing)
• Efficiently scans hosts, depending on the number of hosts in the network
• Many excellent freeware tools available
• Highly automated (for the scanning component)
• Low cost

Weaknesses:
• Does not directly identify known vulnerabilities (although it will identify commonly used Trojan ports [e.g., 31337, 12345, etc.])
• Generally used as a prelude to penetration testing, not as a final test
• Requires significant expertise to interpret results

VULNERABILITY SCANNING

Strengths:
• Can be fairly fast, depending on the number of hosts scanned
• Some freeware tools available
• Highly automated (for scanning)
• Identifies known vulnerabilities
• Often provides advice on mitigating discovered vulnerabilities
• Cost ranges from high (commercial scanners) to low (freeware scanners)
• Easy to run on a regular basis

Weaknesses:
• High false positive rate
• Generates a large amount of traffic aimed at a specific host (which can cause the host to crash or lead to a temporary denial of service)
• Not stealthy (e.g., easily detected by IDS, firewalls and even end-users [although this may be useful in testing the response of staff and alerting mechanisms])
• Can be dangerous in the hands of a novice (particularly DoS attacks)
• Often misses the latest vulnerabilities
• Identifies only surface vulnerabilities


PENETRATION TESTING

Strengths:
• Tests the network using the methodologies and tools that attackers employ
• Verifies vulnerabilities
• Goes beyond surface vulnerabilities and demonstrates how these vulnerabilities can be exploited iteratively to gain greater access
• Demonstrates that vulnerabilities are not purely theoretical
• Can provide the realism and evidence needed to address security issues
• Social engineering allows for testing of procedures and the human element of network security

Weaknesses:
• Requires great expertise
• Very labor intensive
• Slow; target hosts may take hours/days to "crack"
• Due to the time required, not all hosts on medium or large sized networks will be tested individually
• Dangerous when conducted by inexperienced testers
• Certain tools and techniques may be banned or controlled by agency regulations (e.g., network sniffers, password crackers, etc.)
• Expensive
• Can be organizationally disruptive

PASSWORD CRACKING

Strengths:
• Quickly identifies weak passwords
• Provides a clear demonstration of password strength or weakness
• Easily implemented
• Low cost

Weaknesses:
• Potential for abuse
• Certain organizations restrict its use

LOG REVIEWS

Strengths:
• Provides excellent information
• Only data source that provides historical information

Weaknesses:
• Cumbersome to review manually
• Automated tools are not perfect and can filter out important information


FILE INTEGRITY CHECKERS

Strengths:
• Reliable method of determining whether a host has been compromised
• Highly automated
• Low cost

Weaknesses:
• Does not detect any compromise that occurred prior to installation
• Checksums need to be updated when the system is updated
• Checksums need to be protected (e.g., on read-only CD-ROM), because they provide no protection if they can be modified by an attacker
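The core of a file integrity checker is small enough to sketch. This illustration (the file paths and baseline location are assumptions) records SHA-256 checksums on the first run and flags changes on later runs; as noted above, a real baseline belongs on read-only media.

import hashlib
import json
from pathlib import Path

WATCHED = ["/etc/passwd", "/etc/hosts"]   # hypothetical critical files
BASELINE = Path("baseline.json")

def digest(path):
    # SHA-256 checksum of the file contents
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

if not BASELINE.exists():
    BASELINE.write_text(json.dumps({p: digest(p) for p in WATCHED}, indent=2))
    print("Baseline recorded")
else:
    for path, expected in json.loads(BASELINE.read_text()).items():
        status = "OK" if digest(path) == expected else "MODIFIED"
        print(status, path)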

VIRUS DETECTORS

Strengths:
• Excellent at preventing and removing viruses
• Low/medium cost

Weaknesses:
• Require constant updates to be effective
• Some false positive issues
• Ability to react to new, fast-replicating viruses is often limited

WAR DIALING

Strengths:
• Effective way to identify unauthorized modems

Weaknesses:
• Legal and regulatory issues, especially if using the public switched network
• Slow

WAR DRIVING

Strengths:
• Effective way to identify unauthorized wireless access points


AUTOMATE TESTING

What is "Automated Testing"?

Simply put, what is meant by "Automated Testing" is automating the manual testing process currently in use. This requires that a formalized "manual testing process" currently exists in your company or organization. Minimally, such a process includes:

• Detailed test cases, including predictable "expected results", which have been developed from Business Functional Specifications and Design documentation

• A standalone Test Environment, including a Test Database that is restorable to a known constant, such that the test cases are able to be repeated each time there are modifications made to the application

If your current testing process does not include the above points, you are never going to be able to make any effective use of an automated test tool. So if your "testing methodology" just involves turning the software release over to a "testing group" comprised of "users" or "subject matter experts" who bang on their keyboards in some ad hoc fashion or another, then you should not concern yourself with testing automation. There is no real point in trying to automate something that does not exist. You must first establish an effective testing process.

The real use and purpose of automated test tools is to automate regression testing. This means that you must have or must develop a database of detailed test cases that are repeatable, and this suite of tests is run every time there is a change to the application to ensure that the change does not produce unintended consequences.
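As a minimal sketch of what such a repeatable test case looks like in code, here is an illustration using Python's unittest. The discount() function is a hypothetical stand-in for the application under test; the point is that each case has a predictable expected result, so the whole suite can be re-run unchanged after every modification.

import unittest

def discount(order_total):
    # Hypothetical stand-in for the application code under test:
    # 10% off orders of 100.00 or more.
    return round(order_total * 0.10, 2) if order_total >= 100.00 else 0.0

class RegressionSuite(unittest.TestCase):
    # Each test case carries its own expected result, so the suite is
    # repeatable after every change to the application.
    def test_no_discount_below_threshold(self):
        self.assertEqual(discount(99.99), 0.0)

    def test_ten_percent_at_threshold(self):
        self.assertEqual(discount(100.00), 10.0)

if __name__ == "__main__":
    unittest.main()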

An "automated test script" is a program. Automated script development, to be effective, must be subject to the same rules and standards that are applied to software development. Making effective use of any automated test tool requires at least one trained, technical person – in other words, a programmer.

Automation is used to replace or supplement manual testing with a suite of test programs. Benefits to product developers include increased software quality, improved time to market, repeatable test procedures, and reduced testing costs.

REDUCED TESTING TIME. A typical automated test suite will run in less than 24 hours, without any human intervention required. For a sophisticated product, manual testing may require dozens of staff months to perform the same testing.

CONSISTENT TEST PROCEDURES. With a complex testing process, manual testing often yields inconsistent coverage and results depending on the staff and schedule employed. An automated test suite ensures the same scope and process are used each time testing is performed.

REDUCED QA COSTS. Automated testing has an upfront cost to develop, but over the lifetime of a product it will offer substantial net savings. Developing an average automated test suite costs 3-5 times as much as a complete manual test cycle; over multiple product releases with multiple cycles per release, this cost is quickly recouped.

IMPROVED TESTING PRODUCTIVITY. With its much shorter execution time, an automated test suite can be run multiple times over the course of a product development cycle. By testing earlier and more often, bugs are detected and corrected earlier and at much reduced expense.

IMPROVED PRODUCT QUALITY. The sum of improved test procedures and testing productivity is a substantial improvement in product quality. Automated testing detects functional and performance issues more efficiently, allowing test staff to focus on quality in areas such as documentation, installation, hardware compatibility, etc.


Who Should Automate Tests?

I have been a GUI test automation specialist for a couple of different testing groups over the past four years and currently head up a small team of automation specialists. I think it makes a lot of sense to have someone focus on these aspects of the testing. These are some of my thoughts on how to select someone to do this task. A test automator needs to have good testing and development skills. She needs to understand testing requirements and the situations testers face. Automating tests should not be an opportunity to impose a particular testing methodology on testers; they will find fault with it and refuse to use it. Rather, it needs to build from existing testing methodologies.

If the test automator has background as a tester, you will need to ask if she will show the necessary discipline. Sometimes testers who really want to be programmers seize on test automation as a way for them to develop these skills. It is important that they have good judgment and not get carried away with the programming. Be wary if they are hoping to automate all of their testing. They need to be focusing on the big wins. They may focus on improving the automation when it is actually good enough for the job.

A test automator needs to know how to develop software. She needs to be particularly aware of issues such as maintenance and reliability. Making the system easy to update with changes to the product under test should be the priority.

If her background is as a developer, you will need to ask if she has understanding and respect for the testing process.

Sometimes you can find independent contractors who have well-matched backgrounds. With them, you will have to ask who will be maintaining the testing system after they have left. Maintenance will be a critical challenge. If you have access to good training in test automation, take advantage of it. Developments in test automation are being made very quickly. It's often cheaper to pay for people to learn from someone else's mistakes than to have them make the same mistakes themselves. Don't assign automation to rejects from programming or testing. Unless test automation is done by someone who is motivated and working closely with the rest of the development group, it will not succeed.

Choosing What To Automate

A colleague once asked me if I thought it was theoretically possible to automate 100% of testing. This question threw me. Theoretically, testing should never be necessary in the first place; the programs should be coded correctly the first time! But we're not really talking about the theoretical. Testing is the art of the pragmatic. Good software testing requires good judgment.

Look at where you are spending a lot of time testing manually. This is what you should consider automating. If you are a conscientious tester, you are often aware of things you wish you had time to test. Don't focus your automation efforts on these tasks that may otherwise go untested; it's usually a mistake. For one thing, you only want to code automation after you have a settled testing procedure. If you've run through your manual tests a couple of times, you probably have a solid sense of how to proceed with the testing. You don't want to automate tests you haven't run much manually and then realize that there is a more effective procedure; this may mean reworking your automation or just giving up on it. Another problem with automating tests you haven't found the time to run manually is that you're not likely to find the time to maintain them later. Test automation always breaks down at some point. Make sure the tests are important enough that you will devote the time to maintain them when the opportunity arises. First, get your testing procedures and practices standardized and effective. Then, look at how you can use automation to improve your productivity.

Testing can be boring. Sometimes people want to automate even casual tests that will only be executed a couple times. The thought is that automation may allow them to avoid the tedium. But there are snags. Complications arise. Test automation will itself have to be debugged. This may often take as much time as it would to just execute the tests manually in the first place. I use a rule of thumb that says test automation will take at least ten times the time it would take to execute the tests manually. Don't fall into the temptation to automate simply to make your job more exciting.


Many testers really want to be programmers. Test automation may provide these people with an opportunity to develop their programming skills. If you find yourself in this circumstance, try to stay clear on your goals and on how to sensibly use your programming skills to accelerate the testing. Don't let your attention get caught up in the programming for its own sake. Don't try to automate all of your testing. Don't be a perfectionist. If your program does the testing, great. It can have a couple of bugs. You're not creating a commercial product; if there are fatal bugs, you'll be around to fix them. Later I'll discuss the parts of test automation that must be reliable. If you are intent on becoming a programmer, hone your testing skills while you seek a programming position; they will be extremely valuable when you get programming work.

Performance is an area where effort is easily wasted in test automation. Performance improvements generally depend on assumptions about the product under test. But since maintainability is usually fostered by making as few assumptions about how the product works as is practical, improving performance often reduces maintainability. Don't do this. Make maintainability a priority. I have seen performance improvements to test automation that had to be ripped out when the product changed; they made it harder to maintain the test suite and didn't last long anyway. The best way to allow more tests to be run in a day is to design your testing system to allow for unattended testing. I have more to say about this later.

Test automation efforts have failed by trying to do too much. You are better off trying to get first results quickly. This has several advantages. It will allow you to quickly identify any testability issues. These may require cooperation from developers and may take some time to resolve; the sooner they are identified, the better off you are. You may also wish to automate just the most laborious part of the testing, leaving such items as setup and verification to be done manually. Starting small and building from there will allow you to validate your testing strategy. Getting early feedback from testers, programmers and build engineers will allow you to grow your test suite into something that will benefit many people. Demonstrate to your programmers the assumptions you are depending on.

If you've been asked to specialize in test automation, you may find a tendency to try to get a big chunk all worked out before handing it off. Fight this tendency. The sooner you hand off bits to others that they can use in their daily testing, the better off you all will be. Test automation may require testers to rethink how they are doing their job; the sooner they start this, the better. Late in a testing cycle, they may find themselves putting all their energy into keeping up with the product changes and the bug list. They may not put much energy into learning how to use the automation, and you may find yourself frustrated when it goes underused and unappreciated.

First, try to get one test to run. Then build up your test suite. Realize that the people using test automation don't care how much code you've written to support the testing. All they will notice is how many tests are automated and how reliable the test suite is. After you have a small suite, you can work on generalizing code and putting together a more general testing system.

Build acceptance tests are the tests that are run before software is moved into testing. These should be able to be run quickly, often in less than an hour. These tests are excellent candidates for automation. These tests are run frequently. These tests should try to cover as many different functions of the product as possible. The aim is breadth, not depth.

It's worth the investment to make your acceptance tests easy to set up and run. Once the acceptance test suite is put in place, smart programmers will want to run it on their code before checking it in. Finding their own bugs will help them avoid embarrassment and a lot of rushing around. As a tester, you will want to do all that you can to make this happen.
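A build acceptance suite can be as simple as a short script that exercises breadth, not depth, and exits non-zero if anything basic is broken. The sketch below is only an illustration: the endpoints are hypothetical and it assumes the third-party requests library.

import requests

CHECKS = [
    ("home page loads", "https://build-under-test.example.com/"),
    ("search responds", "https://build-under-test.example.com/search?q=smoke"),
    ("login page loads", "https://build-under-test.example.com/login"),
]

failures = 0
for name, url in CHECKS:
    try:
        ok = requests.get(url, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    print(("PASS" if ok else "FAIL") + ": " + name)
    if not ok:
        failures += 1

raise SystemExit(failures)   # a non-zero exit keeps a broken build out of testing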

In my experience, making good decisions about what to automate can be critical to successful test automation. There are often many simple things that can give big paybacks. Find them.


Cost-Effective Automated Testing

Automated testing is expensive (contrary to what test tool vendors would have you believe). It does not replace the need for manual testing or enable you to "down-size" your testing department. Automated testing is an addition to your testing process. According to Cem Kaner, in his paper entitled "Improving the Maintainability of Automated Test Suites" (www.kaner.com), it can take 3 to 10 times as long (or longer) to develop, verify, and document an automated test case than to create and execute a manual test case. This is especially true if you elect to use the "record/playback" feature (contained in most test tools) as your primary automated testing methodology. Record/playback is the least cost-effective method of automating test cases.

Automated testing can be made to be cost-effective, however, if some common sense is applied to the process:

• Choose a test tool that best fits the testing requirements of your organization or company. An "Automated Testing Handbook" is available from the Software Testing Institute (www.ondaweb.com/sti) which covers all of the major considerations involved in choosing the right test tool for your purposes.

• Realize that it doesn't make sense to automate some tests. Overly complex tests are often more trouble than they are worth to automate. Concentrate on automating the majority of your tests, which are probably fairly straightforward. Leave the overly complex tests for manual testing.

• Only automate tests that are going to be repeated. One-time tests are not worth automating.

• Avoid using "Record/Playback" as a method of automating testing. This method is fraught with problems, and is the most costly (time consuming) of all methods over the long term. The record/playback feature of the test tool is useful for determining how the tool is trying to process or interact with the application under test, and can give you some ideas about how to develop your test scripts, but beyond that, its usefulness ends quickly.

• Adopt a data-driven automated testing methodology. This allows you to develop automated test scripts that are more "generic", requiring only that the input and expected results be updated. There are two data-driven methodologies that are useful; I will discuss both of these in detail in this paper. (A minimal sketch of the data-driven idea follows.)
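As an illustration of the data-driven idea only (the sample data, column names and login() stand-in below are hypothetical, and this is not necessarily one of the two methodologies discussed later): the test logic stays generic, and only the rows of input and expected results change.

import csv
import io

# Hypothetical test data; in practice this would live in an external
# spreadsheet or CSV file maintained by the testers.
TEST_DATA = io.StringIO(
    "username,password,expected\n"
    "admin,secret,success\n"
    "admin,wrong,failure\n"
    "guest,guest,failure\n"
)

def login(username, password):
    # Stand-in for driving the real application under test.
    return username == "admin" and password == "secret"

for row in csv.DictReader(TEST_DATA):
    actual = "success" if login(row["username"], row["password"]) else "failure"
    verdict = "PASS" if actual == row["expected"] else "FAIL"
    print(verdict, row["username"], row["expected"], "->", actual)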

The Record/Playback Myth

Every automated test tool vendor will tell you that their tool is "easy to use" and that your non-technical user-type testers can easily automate all of their tests by simply recording their actions, and then playing back the recorded scripts. This one statement alone is probably the most responsible for the majority of automated test tool software that is gathering dust on shelves in companies around the world. I would just love to see one of these salespeople try it themselves in a real-world scenario. Here's why it doesn't work:

• The scripts resulting from this method contain hard-coded values which must change if anything at all changes in the application.

• The costs associated with maintaining such scripts are astronomical, and unacceptable.

• These scripts are not reliable, even if the application has not changed, and often fail on replay (pop-up windows, messages, and other things can happen that did not happen when the test was recorded).

• If the tester makes an error entering data, etc., the test must be re-recorded.

• If the application changes, the test must be re-recorded.

• All that is being tested are things that already work. Areas that have errors are encountered in the recording process (which is manual testing, after all). These bugs are reported, but a script cannot be recorded until the software is corrected. So what are you testing?

After about 2 to 3 months of this nonsense, the tool gets put on the shelf or buried in a desk drawer, and the testers get back to manual testing. The tool vendor couldn't care less – they are in the business of selling test tools, not testing software.


Software Test Automation and the Product Life Cycle

A product's stages of development are referred to as the product life cycle (PLC). There is considerable work involved in getting a product through its PLC. Software testing at many companies has matured as lessons have been learned about the most effective test methodologies. Still, there is a great difference of opinion about the implementation and effectiveness of automated software testing and how it relates to the PLC.

Computers have taken over many functions in our society that were once "manual" operations. Factories use computers to control manufacturing equipment and have cut costs enormously. Electronics manufacturers use computers to test everything from microelectronics to circuit card assemblies. Since automation has been so successful in so many areas, does it make sense that a software program should be used to test another software program? This is referred to as "automated software testing" for the remainder of this article.

Software testing using an automatic test program will generally avoid the errors that humans make when they get tired after multiple repetitions. The test program won't skip any tests by mistake. The test program can also record the results of the test accurately. The results can be automatically fed into a database that may provide useful statistics on how well the software development process is going. On the other hand, software that is tested manually will be tested with a randomness that helps find bugs in more varied situations. Since a software program usually won't vary each time it is run, it may not find some bugs that manual testing will. Automated software testing is never a complete substitute for manual testing.

There has been plenty of debate about the usefulness of automatic software testing. Some companies are quite satisfied with the developer testing his/her own work: as soon as the developer says it's done, they ship it. Testing your own work is generally thought of as risky, since you'll be likely to overlook bugs that someone not so close to the code (and not so emotionally attached to it) will see easily. The other extreme is the company that has its own automatic software test group as well as a group that tests the software manually. Just because we have computers, does that mean it is cost effective to write tests to test software and then spend time and resources to maintain them? The answer is both yes and no. When properly implemented, automated software testing can save a lot of time, time that will be needed as the software approaches shipping.

This is where the PLC comes in. How effectively you make use of the PLC will often be dependent on your programming resources and the length of the PLC. Companies large and small struggle with software testing and the PLC. Hopefully, this discussion of the PLC should help you determine when to use automation and when manual testing is preferred. This should help you answer the questions: "Why should I automate my software testing?" "How can I tell if automation is right for my product?" "When is the best time to develop my test software?".

The Product Life Cycle

As we discuss the use of automated and manual testing, we need to understand what happens in each phase of the product life cycle. The PLC is made up of six major stages, the Design Phase, the Code Complete Phase, the Alpha Phase, the Beta Phase, the Zero Defect Build Phase, and the Green Master Phase. You can think of the PLC as a timeline showing the development of a software product. These are the major milestones that make up the Product Life Cycle. Products that follow these guidelines for implementation of the PLC will have a much better chance of making it to market on time.

The implementation of the PLC varies widely from company to company. You can use this as a guide for future reference to assist you in your automation efforts. Your implementation will vary from the ideal PLC that is discussed here, but your software's success may depend on how well you've implemented its PLC. If your PLC is to include automated testing, you should pay attention to which automated tasks are performed during each phase.

For each phase we'll describe it, define its special importance and discuss how to incorporate software automation into your project. Most other discussions of the PLC don't include the lessons learned about test automation. This should be your "one-stop" guide to help you know how and when automation fits into the PLC.

Design Phase

What is the Design Phase? The design phase begins with an idea. Product managers, QA, and Development get together at this point to determine what will be included in the product. Planning is the essence of the design phase. Begin with the end in mind and with a functional specification. Write down all of your plans. What will your product do? What customer problems does it solve?

Some companies, incorrectly, don't include Quality Assurance (QA) in the design phase. It is very important that QA be involved as early as possible. While developers are writing code, QA will be writing tests. Even though QA won't really have the total picture of the product, they will want to get as much of a jump on things as possible. Remember that the primary purpose of QA is to report status. It is important to understand the product's status even early in the Design Phase.

Why is the Design Phase important? If you think you're too short on time to write up a functional description of your product, then consider the extra time involved in adding new features later on. Adding features later (especially once the Code Complete Phase has been reached) is known as "feature creep". Feature creep can be a very costly, haphazard way to develop your product, and may materially interfere with delivery of the software.

Automation activity during the Design Phase. As soon as the functional specification is written, create all test cases so that they can be run manually. Yes, that's right, manually! These manual tests are step-by-step "pseudo" code that would allow anyone to run the test. The benefits of this approach are:

1. Your test cases can be created BEFORE ever seeing the software's user interface (UI). It is too soon to automate tests at this point in time, but you can create manual tests with only a small risk that later changes will invalidate them. This is a point of great frustration for those who have tried to implement automated test scripts too early: just as soon as the test script is written, changes in the UI are bound to be introduced, and all the work on the script is found to be for nothing.

2. When (not if) the code is modified, you will always have manual procedures that can be adapted to the change more quickly than an automated test script. This is a great way to guarantee that you will at least have tests you can perform even if automation turns out to not be feasible. (Note: one of the objections to software test automation is that the tests must be continually updated to reflect changes in the software. These justifiable objections usually stem from the fact that automation was started too early.)

3. Test methods can be thought out much more completely because you don't have to be concerned with the programming language of the automation tool. The learning curve of most automation tools may get in the way of writing meaningful tests.

If you have the resources available, have them begin training on the test tools that will be used. Some members of the team should start writing library routines that can be used by all the test engineers when they start their test coding. Some of these routines will consist of data collection/result reporting tools and other common functions.


After the manual test cases have been created, decide with your manager which test cases should be automated. Use the Automation Checklist found later in this article to assist you in deciding what tests to automate. If you have enough manpower, you may want to have a test plan team and an automation team. The test plan team would develop tests manually, and the automation team would decide which of the manual tests should be run automatically (following the guidelines of the Automation Checklist later in this article). The automation team would be responsible for assuring that the tests can be successfully and cost-effectively automated.

Sometime during the design phase, as soon as the design is firm enough, you'll select the automation tools that you will need. You don't have to decide exactly which tests need to be automated yet, but you should have an idea of the kinds of tests that will be performed and the necessary capabilities of the tools. That determination is easier as the software gets closer to the code complete phase. Your budget and available resources will begin to come into play here.

For just a moment, let's discuss some of the considerations you should use in selecting the test tools you need. You'll also want to keep in mind the Automation checklist later in this column. It will help you determine if a test should be automated. There are a few good testing tools including Apple Computer's Virtual User (VU) (See the September, 1996 article "Software Testing With Virtual User", by Jeremy Vineyard) and Segue's QA Partner (Segue is pronounced "Seg-way").

Is there a lot of user interface (UI) to test? Software with a lot of UI is well suited for automated black box testing. However, some important considerations are in order here. You need to get involved with development early to make sure that the UI can be "seen" by the automation tool. For example: I've seen programs in which a Virtual User 'match' task (note: a task is what a command is called in Virtual User) couldn't find the text in a text edit field. In those cases, this occurred because the program didn't use standard Macintosh calls, but rather was based on custom libraries that provided UI features their own way.

Will the automated test environment affect the performance or operation of the system being tested? When you're trying to test the latest system software, you don't want the testing system changing the conditions of the test.

Is the speed at which the tests run a consideration? If you're trying to measure the performance of a system, you'll want to make sure that the conditions are as much like the "real world" as possible. You should consider the amount of network traffic that is present while you're running your tests. Also, the speed of your host processor can affect the time it takes your tests to run. You should schedule your tests so that you minimize the possibility of interfering with someone else on your network. Either isolate your network from others, or warn them that you will be testing and that there is a possibility that their network activity may slow down.

What kinds of tests will be performed? The lower the level the testing is, the more likely white box testing should be used. A good example of this would be if you have a function that does a calculation based on specific inputs. A quick C program that calls the function would be much faster and could be written to check all the possible limits of the function. A tool like VU would only be able to access the function through the UI and would not be able to approach the amount of coverage that a C program could do in this situation.
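The passage above describes a quick C driver that calls the function directly; the same white-box idea is sketched here in Python, to stay consistent with the earlier examples. The monthly_payment() calculation is a hypothetical function under test, not one from the guide.

def monthly_payment(principal, annual_rate, months):
    # Hypothetical calculation under test: standard amortized payment formula.
    if months <= 0 or principal < 0 or annual_rate < 0:
        raise ValueError("out of range")
    r = annual_rate / 12
    return principal / months if r == 0 else principal * r / (1 - (1 + r) ** -months)

# Hit the limits of the function directly, without going through any UI.
boundary_cases = [(0, 0.05, 12), (1000, 0.0, 12), (1000, 0.05, 1), (1000, 0.05, 360)]
for case in boundary_cases:
    print(case, "->", round(monthly_payment(*case), 2))

for bad in [(1000, 0.05, 0), (-1, 0.05, 12)]:
    try:
        monthly_payment(*bad)
        print("FAIL: expected ValueError for", bad)
    except ValueError:
        print("PASS: rejected", bad)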

Is there a library of common functions available or will you have to write them yourself? It will save a lot of time if you don't have to develop libraries yourself. No matter how extensive the command set, efficient use of library functions will be essential. Libraries that others have written may be useful; you can modify them to meet your own needs.


What will be the learning curve for a script programmer? The time it takes will depend greatly on the kind of testing you have to do and the experience of the programmer. If you've done your homework on the available test tools, you should know what to expect. Some companies even offer training courses (for a price) in their test software.

Can your automation tool automatically record actions for you? Some tools do this, but don't expect to rely on this too heavily. Tools that I've seen that do this end up creating code that has hard-coded strings and tends to be organized sequentially rather than by calling procedures. These recorded scripts are harder to maintain and reuse later. If you plan to use the same script for international testing, modifying the script will mean much more work. If you want to record actions, I recommend that you do it only to create short functions, and you should edit the script after recording to remove the unwanted hard-coded strings, etc.

Can you run AppleScript scripts from the tool's script language? This is a very useful feature since AppleScript scripts are so easy to write and can add additional functionality to your test tool.

In preparing this article, I encountered several "pearls" worth relating:

"Success in test automation requires careful planning and design work, and it's not a universal solution. ... automated testing should not be considered a replacement for hand testing, but rather as an enhancement." (Software Testing with Visual Test 4.0, forward by James Bach, pg. vii)

"The quality assurance engineers then come on the scene... and begin designing their overall test plan for the features of the product...."

"The goal is to have the test plan and checklists laid out and ready to be manually stepped through by the test engineers when each feature is completed by the programmers. Each item on a checklist is considered a scenario and related scenarios are grouped into test cases." (

Code Complete Phase

What is the Code Complete Phase? At this major milestone the code has been completed. The code has been written, but not necessarily yet debugged. (Development may try to claim they are at code complete even though they may still have major coding still left to do. Go ahead and let them declare the code complete, but don't let them get to Alpha until the code really is completely written.)

Why is the Code Complete Phase important? Sooner or later you'll have to get to a point where new code is no longer being written, and the major effort is in fixing bugs. Development will be relieved to get to this point as now they don't have to be as concerned with the initial coding and can concentrate on refining the existing product. (This is why they will try to claim they are at code complete even when they are not).

Automation activity during the Code Complete Phase

Although the UI may still change, QA can begin writing automated test cases. The tests that should be written at this point are breadth tests that tell the status of the overall software product. Don't write tests which stress the product until you get close to Alpha; the product will probably break very easily. Some acceptance (or "smoke") tests should also be created to give a quick evaluation of the status of a particular build. Before reaching the Alpha phase, there should also be tests written to test the Installer, boundaries (stress tests), compatibility (hardware and OS), performance, and interoperability.


Somewhere just before code complete, you will need to decide which tests should be made into automated tests and what test tools to use. Use the following checklist to help you determine which tests should be automated:

Automation Checklist

If you answer yes to any of these questions, then your test should be seriously considered for automation.

Can the test sequence of actions be defined?

Is it useful to repeat the sequence of actions many times? Examples of this would be Acceptance tests, Compatibility tests, Performance tests, and regression tests.

Is it possible to automate the sequence of actions? This may determine that automation is not suitable for this sequence of actions.

Is it possible to "semi-automate" a test? Automating portions of a test can speed up test execution time.

Is the behavior of the software under test the same with automation as without? This is an important concern for performance testing.

Are you testing non-UI aspects of the program? Almost all non-UI functionality can and should be covered by automated tests.

Do you need to run the same tests on multiple hardware configurations?

Note on ad hoc tests: Ideally every bug should have an associated test case, but ad hoc tests are best done manually. An ad hoc test is performed manually, with the tester attempting to simulate real-world use of the software product; you should try to imagine yourself in real-world situations and use your software as your customer would. It is during ad hoc testing that the most bugs will be found. As bugs are found during ad hoc testing, new test cases should be created so that they can be reproduced easily and so that regression tests can be performed when you get to the Zero Bug Build phase. It should be stressed that automation can never be a substitute for manual testing.

Alpha Phase

What is the Alpha Phase? Alpha marks the point in time when Development and QA consider the product stable and completed. The Alpha Phase is your last chance to find and fix any remaining problems in the software. The software will go from basically functional to a finely tuned product during this phase.

Why is the Alpha Phase important? Alpha marks a great accomplishment in the development cycle. The code is stable and most of the major bugs have been found and fixed.

Automation Activity During The Alpha Phase

At this point you have done the tasks that need to be done in order to reach Alpha. That is, you have all your compatibility, interoperability, and performance tests completed and automated as far as possible. During Alpha you'll be running breadth tests every build. Also you'll run the compatibility, interoperability, and performance tests at least once before reaching the next milestone (beta). After the breadth tests are run each build, you'll want to do ad hoc testing as much as possible. As above, every bug should be associated with a test case to reproduce the problem.


Beta Phase

What is the Beta Phase? The product is considered "mostly" bug free at this point. This means that all major bugs have been found. There should only be a few non-essential bugs left to fix. These should be bugs that the user will find merely annoying or bugs that pose relatively little risk to fix. If any major bugs are found at this point, there will almost certainly be a slip in the shipping schedule.

Automation activity during the Beta Phase

There's no more time left to develop new tests. You'll run all of your acceptance tests as quickly as possible and spend the remaining time on ad hoc testing. You'll also run compatibility, performance, interoperability and installer tests once during the beta phase.

Remember that as you do ad hoc testing every bug should have an associated test case. As bugs are found during ad hoc testing, new test cases should be created so that they can be reproduced easily and so that regression tests can be performed when we get to the Zero Bug Build phase.

Zero Defect Build Phase

What is the Zero Defect Build Phase? This is a period of stability where no new serious defects are discovered. The product is very stable now and nearly ready to ship.

Automation Activity During The Zero Defect Build Phase

Run regression tests. Regression testing means running through your fixed defects again and verifying that they are still fixed. Planning for regression testing early will save a lot of time during this phase and the Green Master phase.

Green Master

What is the Green Master Phase? Green Master is sometimes referred to as the Golden Master or the final candidate. The product goes through a final checkout before it is shipped (sent to manufacturing).

Automation activity during the Green Master Phase

After running general acceptance tests, run regression tests. You should run through your fixed defects once again to verify that they are still fixed. Planning for regression testing early will save a lot of time during this phase.

Understanding the PLC will help you select your automation tools

Perhaps this review of the PLC is 'old hat' for you. In my experience, reviewing the process usually helps to focus the project. Since software testing has been evolving over the years, and many companies have been struggling with how to implement it, we can always use all the help and advice that we can get. There are some good sources of information that will help. Take a look at the Software Testing Laboratories web site at URL http://www.stlabs.com/. There are a few relevant articles in James Bach's archives at http://www.stlabs.com/LABNOTE.HTM. Software Testing Labs is mostly geared toward MS Visual Test, but the QA principles involved are the same for any platform, Macintosh included.


This review of the PLC represents an ideal situation where Development and QA both buy into this same way of doing the software business. It will also require some commitment on the part of Management to assure that each phase is supported and accepted. Once all of those conditions are met, everyone involved needs to be focused on making sure that no short cuts are taken around the agreed-upon PLC. Hopefully this discussion has been helpful to you in automating your software tests.

Automated Testing of Graphical User Interfaces: Capture-and-Replay Tools

All CR-Tools currently commercially available [im2] are similar in function and operation:

Capture mode: During a test session, the Capture-and-Replay Tool (CR-Tool) captures all manual user interactions on the test object. All CR-Tools are object-oriented, i.e. they recognize each GUI element that is selected (such as a button, radio-box, toolbar, etc.) and capture all object characteristics (name, color, label, value), not just the X/Y coordinates of the mouse-click.

Programming: The captured test steps are stored by the CR-Tool in a test script written in a language comparable to C or Basic. With all the functions of a programming language available in the test script (case differentiation, loops, subroutines), even complex test processes can be implemented.

Checkpoints: In order to determine whether the program being tested (SuT: Software under Test) is functioning correctly or whether there have been any errors, the tester (during test capture or while editing the script) can insert additional checkpoints into the test script. By this means the layout-relevant characteristics of window objects (color, position, size, etc.) can be verified along with functional characteristics of the SuT (a mask value, the contents of a message box, etc.).

Replay mode: Once captured, tests can be replayed and thus in principle are repeatable at any time. The aforementioned object-oriented test capture permits GUI elements to be re-recognized when the test is repeated, even if the GUI has meanwhile been modified by a change in the software. If the test object behaves differently during a repeat test or if checkpoints are violated, then the test fails. The CR-Tool records this as an error in the test object.

Tool vendors promote CR-Tools as a very fast and easy method of automating GUI testing. But in reality there are lots of traps and pitfalls which can impair the effectiveness of CR-Tool based testing. To understand where and why, we have to take a closer look at GUIs and GUI testing.
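To make the checkpoint idea concrete, here is a minimal sketch in Python terms; it is not the API of any particular CR-Tool, and the attribute names are invented. A checkpoint simply compares the object characteristics captured earlier against those observed at replay time.

    # Illustrative only: a checkpoint compares expected object attributes
    # (captured during recording or added by the tester) with the attributes
    # observed when the test is replayed.
    def checkpoint(expected: dict, actual: dict) -> bool:
        mismatches = {name: (want, actual.get(name))
                      for name, want in expected.items()
                      if actual.get(name) != want}
        if mismatches:
            print("checkpoint failed:", mismatches)   # a real tool would log this as a test error
            return False
        return True

    # Example: layout and functional characteristics of a 'Save' button.
    print(checkpoint({"label": "Save", "enabled": True, "color": "grey"},
                     {"label": "save", "enabled": True, "color": "grey"}))   # False: the label differs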

Testing of the GUI

In a GUI-based software system, all functions are represented in the GUI by visible "interaction points" (IPs), and the activation of intended functions is achieved by directly pointing to their visible representations [Rau]. As a GUI tester you first must be able to recognize all these IPs (which is no problem for a human tester). Then you have to perform a set of test cases to test the functionality of the software underlying the GUI (does the system really store the entered data on the database server?). These tests are specific to each product being tested, and this functional testing is an essential component of GUI testing. You also always have to verify that the visible representations of functions are consistent and make sense to the end-user (why is the save button not labeled "save"?). These "style guide" tests are usually specific to a product line, to all the software products of a specific company, or to software running under a specific windowing system. Of course, in practice there is no


strict borderline between these two types of tests (when I change that data field's value and close the mask, why doesn't the system automatically store the altered data?). Now, where are the traps?

Traps when Accessing GUI-Objects

As mentioned above, CR-Tools are supposed to recognize IPs or GUI objects with object-oriented methodology and also to re-recognize them if the GUI layout has been changed. However, you must make sure this is also true for your tool. If your test tool is able to recognize GUI objects created by C++ development systems, it might not recognize objects within Oracle applications or, for example, OCX controls. In this case you would have to look for an additional interface kit offered by your test tool vendor. Since CR-Tools recognize GUI elements by their ID, which individualizes the specific element within its context, or via a combination of attribute values that uniquely identifies the element, they are sensitive to changes in these values. However, software development tools (e.g. Microsoft's Visual C++) sometimes change the IDs of GUI objects without even notifying the developers (the idea is to release the developer from responsibility for assigning individual IDs to GUI elements). In this case, whenever a product is re-tested during a new release, the CR-Tool must be manually re-taught about all changed GUI objects. A similar problem occurs if GUI objects are eliminated or redesigned due to functional changes in the new product release.

Traps when Testing Functionality

If the CR-Tool captures all your mouse-clicks, it's likely that no real test is recorded, such as: press the 'save' button and wait until 'saved' occurs in a message box. If you really want to test whether the data has been saved on the server, you have to look for the specific data on the server itself (a checkpoint normally not performed by a GUI-testing tool), or you must reload the data record to check that it is the same as the one stored before. Capturing the sequence "enter data, store it, reload it, check if the values of input/output are the same" is therefore a better approach, but there are even more potential traps: Can you be sure that the data reread was not already stored on the server by your tester colleague? (You have to be sure of this fact; otherwise your test case makes no sense at all.) If you are sure, because during testing the system hasn't popped up an 'overwrite data yes/no' message, then you have a new problem: your test case is not reusable and must be rewritten into something like: "enter data, store it, if the 'overwrite data yes/no' message appears press 'yes', reload it, check if the values of input/output are the same". Test cases depend on the SuT's history and current state; this problem is well known in test automation. However, in GUI test automation this problem will occur with nearly every test case (a minimal sketch of a state-tolerant test step follows the list below):

• buttons are enabled/disabled depending on field-values,

• data fields change color to gray and become read-only,

• toolbars and menu entries change during operation,

• the bitmap you stored as a reference bitmap contains system clock output,

• sometimes during testing a word-processor will be running in parallel eating up windows resources, and sometimes not,

• an email message popping up captures the focus and stops your test-run,

• the application iconifies, and other surprises.
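A minimal sketch of such a state-tolerant step follows; the helper names (enter_data, click, window_exists, reload_record) are invented placeholders rather than any real CR-Tool commands, and the FakeUI class exists only so the sketch can run on its own:

    # Sketch of a 'store and verify' step that tolerates the optional
    # 'overwrite data yes/no' dialog described above.
    def store_and_verify(record, ui):
        ui.enter_data(record)                          # fill in the data fields
        ui.click("Save")
        if ui.window_exists("Overwrite data yes/no"):  # appears only if the record already exists
            ui.click("Yes")
        reloaded = ui.reload_record(record["key"])     # read the record back from the server
        assert reloaded == record, "stored and reloaded values differ"

    class FakeUI:
        """Stand-in for a CR-Tool API so the sketch is self-contained."""
        def __init__(self):
            self.db, self.pending = {}, {}
        def enter_data(self, record):
            self.pending = dict(record)
        def click(self, button):
            if button == "Save":
                self.db[self.pending["key"]] = dict(self.pending)
        def window_exists(self, title):
            return False                               # pretend the overwrite dialog did not appear
        def reload_record(self, key):
            return self.db[key]

    store_and_verify({"key": 1, "name": "test"}, FakeUI())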


Traps of Style Guide Testing

In order to give software a consistent Look & Feel, its designers try to establish appropriate layout-rules in a so-called Style Guide. The list below gives some examples:

• 'OK' and 'Cancel' are always located at the bottom or on the right, but never on top or on the left.

• All fields must be accessed via tabs in the same order. If fields are grouped together, then tabs must access the fields within the group.

• All fields must be perfectly aligned.

• Texts must be legible and displayed with sufficient contrast.

• A context-sensitive help function must be accessible for each button. These on-line help instructions must be useful and understandable.

The Style Guide must not only be followed for one mask, but for the complete application. An automated Style Guide Test would therefore be extremely beneficial. Testing costs would be lowered considerably, because it makes no difference whether the CR-Tool tests one mask or all of them. Furthermore, these checks could be run simultaneously ("free of charge") with the necessary functional tests. Considering the multitude of masks, the probability is very high that a product contains undiscovered style guide violations. Automated testing would thus certainly lead to a quality improvement for tested products. The value of using a CR-Tool here should be self-evident! Nevertheless, typical Style Guide criteria are difficult to formulate and quantify, as shown in the examples above. Test automation without formalization of the Style Guide is doomed to failure from the beginning.

Making GUI Test-Automation Work

As discussed above, the capture mode of CR-Tools can only help to achieve an initial prototype

implementation of a test case. Most scripts captured will need additional script programming to obtain useful checkpoints and will require maintenance to make and keep them reusable. Therefore, GUI test automation is a programming task. On the other hand, it is a software specification task, too. The test cases must be defined before capturing, and their specification must be much more detailed than for manual testing. Any organization that is planning to automate testing must already have an established testing process in place; otherwise test development is doomed to failure. The diagram below shows a Testing Model of the imbus company, which specifies a structured process from test planning to test specification, test implementation and finally (automated) testing.


Fig 1: The testing process

Each of these steps is primarily defined by templates of the documents that are generated as a result of the process step. Examples for such templates are illustrated in the following sections.

GUI Test Specification

As described above, GUI tests can be divided into product-specific functional tests and universally valid style guide tests. The functional tests can be subdivided into testing categories such as performance tests, which must be implemented differently for each tested product but can have a similar definition in the test specification. If this is taken into consideration when organizing the test specification, a reusable specification template can be obtained. Figure 2 shows a good basis for a GUI test specification:


A template like this lists the test cases to be performed in each GUI test. But what is inside a test case? Each test case definition should answer the following questions and therefore consist of the following parts:

GUI Test Implementation

After you have defined a method of specifying tests using templates like those shown above, then you are one step closer to automated GUI testing. However, additional rules about the test script implementation are necessary in order to guarantee proper documentation and adequate modularization of the scripts.

As a minimum requirement, each test script should consist of the following six sections:

If all tests are programmed in this manner, then test modules can be obtained which are relatively easy to combine into test sequences. This not only supports the reusability of tests when testing subsequent product releases, but also the reusability of the test programs by other testers, in different test conditions, and/or for testing other products.


Building a Test Case Library

The implementation of test scripts according to the rules described above necessitates relatively high expenditures for test programming, as well as corresponding know-how in CR-Tool programming. In order to "conserve" this knowledge and make it available to less qualified test-tool users, we started to develop a Test Case Library which completes the GUI Test Specification Template at the implementation level.

This library contains prototypes for test scripts (to ensure script implementation according to the rules described above), extensions of the test script language, and, in particular, ready-to-run implementations of Style Guide test cases from our GUI test specification running under Windows 95 and Windows NT. With the continued growth of this library, we expect eventually to cover the full set of Style Guide rules. Accordingly, Style Guide conformance of a software product can be checked by running the library's test scripts, and in the future a style guide written in plain text will no longer be needed. But additional work is still necessary here; in particular, tests checking the ergonomic aspects of a software's GUI still have to be programmed. The following are examples of Style Guide checks already implemented:

In the long-term such a library will only be used if it is constantly maintained and updated: new or modified tests in SW-projects must be periodically checked for reusability. All reusable tests must be perfected and documented; company-wide access must be guaranteed to any new versions of the test-case library. Therefore, in parallel to the implementation of the test case library, an appropriate process for library maintenance should be established. The following figure illustrates the steps needed:

Fig. 6: Test Library Maintenance Steps


Measurements of Expenditures

To determine how much more economical automated GUI testing really is compared to manual testing, we measured and compared the expenditures for both methods during our PIE [EU2]. The baseline project we chose for measuring this data was the development of an integrated PC software tool for radio base stations (for GSM radio communication). This application provides a graphical interface for commissioning, parameterization, hardware diagnostics, firmware downloading, equipment data base creation, and in-field and off-line diagnostics of multiple types of base stations, including a full-graphics editor for equipment definition. The application was developed using Microsoft Visual C++/Visual Studio and is approximately 100,000 lines of C++ code in size.

The table below (Figure 6) shows the measurement results:

V_m := expenditure for test specification.
V_a := expenditure for test specification + implementation.
D_m := expenditure for a single, manual test execution.
D_a := expenditure for test interpretation after automated testing (the execution time itself is not counted, since the tests run unsupervised via the CR-Tool).
V and D are given in hours of work.

E_n := A_a / A_m = (V_a + n*D_a) / (V_m + n*D_m)

where n is the number of test executions, and A_a and A_m are the total automated and manual testing expenditures after n executions.

Figure 7: "Break-even" of GUI Test Automation

The question of main interest is: How often does a specific test have to be repeated before automated testing becomes cheaper than manual testing? In the above table this "break-even" point is represented by the N-factor, in accordance with the equation E_N = A_a/A_m = 100%. The measurements undertaken within our experiments show that a break-even can already be attained by the 2nd regression test cycle (N_total = 2.03). This break-even, however, does have two prerequisites: the tests must run completely without human interaction (e.g. overnight test runs), and no further test script modifications are necessary to rerun the tests in later releases of the product. As already mentioned in this lecture, this is not easy to achieve.
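To illustrate how the break-even point follows from the formula above, here is a small sketch with made-up expenditure figures (these are not the measured values from the project described above):

    # Illustrative only: hypothetical expenditures in hours of work.
    V_m, D_m = 2.0, 1.0    # manual: 2h to specify, 1h per manual execution
    V_a, D_a = 8.0, 0.2    # automated: 8h to specify + implement, 0.2h per result interpretation

    def E(n):
        """Cost ratio automated/manual after n test executions."""
        return (V_a + n * D_a) / (V_m + n * D_m)

    # Find the first n where automation becomes cheaper than manual testing (E(n) < 1).
    n = 1
    while E(n) >= 1.0:
        n += 1
    print(n, round(E(n), 2))   # with these numbers the break-even is reached at n = 8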

If all you do is buy a CR-Tool and begin capturing, then your testing costs will increase to between 125% and 150% of manual testing costs (see E_1 in Fig. 7). And there will be additional costs associated with each test run because of traps such as test script maintenance. On the other hand, if you establish a complete framework for GUI test automation (where the CR-Tool is a cornerstone and not the complete solution), then a decrease in costs to about 40% for a typical product test cycle (E_10) is realistic.

TEST AUTOMATION SUCCESS AND FAILURE

What is Failure?

• Wasted Time
• Wasted Money
• Inaccurate Results
• Demoralized Team
• Overall Reduced Productivity
• Lost Opportunity

What is Success?

• Overall Cost Savings
• Improved Testing
• Shortened Software Development Cycle
• Reliable Results
• Process in Place for Future Success

Automate Testing Success Factors:

Goals:

• Stated goal: "Save manual testers time and improve testing coverage."
• Unstated goal: "Reduce test cycle time."
• Goals specifically designed to be measurable.

Readiness to Automate

• Written test documentation. Each test case numbered individually.

• Test results tracked on spreadsheets that referenced the test case number.
• The test group as a whole had a much better understanding of how to test the product.

Role of the Automated Testing Team

• Service organization focused on building tools.

• Automators understood that automation cannot replace manual testing.
• Manual testers more involved in the process and therefore less threatened.

Automated Test System Architecture

• Tool specifically designed to support creation of automation systems.
• Tool supported & encouraged creating a reusable library ("infrastructure").
• Logical layer vastly improved portability & maintainability of scripts.


Method of Script Creation

• Primarily data driven; record & playback used as a learning tool only.
• Used advanced features of the automation tool to support automated test data generation.

Method of Verification

• Logical window existence functions to verify window appeared.
• Logical comparison between expected fields in window and actual fields.
• Test data verification.

Automation Programming Practices

• Automation standards focused on what constitutes good code.
• Both formal & informal code reviews on a regular basis.
• Commercial source control system used.

TEST AUTOMATION MYTHS AND FACTS

A number of articles and books have been written on different aspects of Software Test Automation. "Test Automation Snake Oil" by James Bach is an excellent article on some of the myths of automation. I would like to discuss some of these myths and will try to point out the facts behind them. I will also discuss some of my observations and hopefully point out possible solutions. These are based on my experience with a number of automation projects I was involved with.

Find more bugs: Some QA managers think that by doing automation they should be able to find more bugs. It’s a myth.

Let's think about it for a minute. The process of automation involves a set of written test cases. In most places the test cases are written by test engineers who are familiar with the application they are testing. The test cases are then given to the automation engineers. In most cases the automation engineers are not very familiar with the test cases they are automating. From test cases to test scripts, automation does not add anything to the process of finding more bugs. The test scripts are only as good as the test cases when it comes to finding bugs. So, it's the test cases that find bugs (or don't find bugs), not the test scripts.

Eliminate or reduce manual testers: In order to justify automation, some point out that they should be able to eliminate or reduce the number of manual testers in the long run and thus save money in the process. Absolutely not true. Elimination or reduction of manual testers is not one of the objectives of test automation. Here is why: as I pointed out earlier, the test scripts are only as good as the test cases, and the test cases are written primarily by manual testers. They are the ones who know the application inside out. If the word gets out (it usually does) that the number of manual testers will be reduced by introducing automation, then most if not all manual testers will walk out the door, and quality will go with them as well.


PROBLEM

There are many pitfalls in automated regression testing. I list a few here. James Bach (one of the LAWST participants) lists plenty of others in his paper "Test Automation Snake Oil." [3]

Problems with the basic paradigm:

Here is the basic paradigm for GUI-based automated regression testing: [4]

a. Design a test case, then run it.
b. If the program fails the test, write a bug report. Start over after the bug is fixed.
c. If the program passes the test, automate it. Run the test again (either from a script or with the aid of a capture utility). Capture the screen output at the end of the test. Save the test case and the output.
d. Next time, run the test case and compare its output to the saved output. If the outputs match, the program passes the test.
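A minimal sketch of steps (c) and (d) in Python terms; the run_test function and the file names are invented stand-ins for actually driving the program and capturing its output:

    from pathlib import Path

    def run_test(test_case: str) -> str:
        """Placeholder for driving the program; returns the captured output."""
        return "rendered table, 3 rows, caption: Top/Times"

    def regression_check(test_case: str, baseline_dir: Path = Path("baselines")) -> bool:
        """Compare current output with the saved baseline; save a baseline on the first run."""
        baseline = baseline_dir / f"{test_case}.txt"
        output = run_test(test_case)
        if not baseline.exists():                  # step (c): the first passing run becomes the baseline
            baseline_dir.mkdir(exist_ok=True)
            baseline.write_text(output)
            return True
        return output == baseline.read_text()      # step (d): pass only if the outputs match

    print(regression_check("table_caption_top"))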

First problem: this is not cheap. It usually takes between 3 and 10 times as long (and can take much longer) to create, verify, and minimally document [5] the automated test as it takes to create and run the test once by hand. Many tests will be worth automating, but for all the tests that you run only once or twice, this approach is inefficient.

Some people recommend that testers automate 100% of their test cases. I strongly disagree with this. I create and run many black box tests only once. To automate these one-shot tests, I would have to spend substantially more time and money per test. In the same period of time, I wouldn’t be able to run as many tests. Why should I seek lower coverage at a higher cost per test? Second problem: this approach creates risks of additional costs. We all know that the cost of finding and fixing bugs increases over time. As a product gets closer to its (scheduled) ship date more people work with it, as in-house beta users or to create manuals and marketing materials. The later you find and fix significant bugs, the more of these people’s time will be wasted. If you spend most of your early testing time writing test scripts, you will delay finding bugs until later, when they are more expensive. Third problem: these tests are not powerful. The only tests you automate are tests that the program has already passed. How many new bugs will you find this way? The estimates that I’ve heard range from 6% to 30%. The numbers go up if you count the bugs that you find while creating the test cases, but this is usually manual testing, not related to the ultimate automated tests. Fourth problem: in practice, many test groups automate only the easy-to-run tests. Early in testing, these are easy to design and the program might not be capable of running more complex test cases. Later, though, these tests are weak, especially in comparison to the increasingly harsh testing done by a skilled manual tester. Now consider maintainability:

Maintenance requirements don’t go away just because your friendly automated tool vendor forgot to mention them. Two routinely recurring issues focused our discussion at the February LAWST meeting.

• When the program’s user interface changes, how much work do you have to do to update the test scripts so that they accurately reflect and test the program?

• When the user interface language changes (such as English to French), how hard is it to revise the scripts so that they accurately reflect and test the program?

We need strategies that we can count on to deal with these issues. Here are two strategies that don’t work:


Creating test cases using a capture tool: The most common way to create test cases is to use the capture feature of your automated test tool. This is absurd. In your first course on programming, you probably learned not to write programs like this:

    SET A = 2
    SET B = 3
    PRINT A+B

Embedding constants in code is obviously foolish. But that’s what we do with capture utilities. We create a test script by capturing an exact sequence of exact keystrokes, mouse movements, or commands. These are constants, just like 2 and 3. The slightest change to the program’s user interface and the script is invalid. The maintenance costs associated with captured test cases are unacceptable.
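The point is the same one made in that first programming course: the captured script freezes its inputs in place, where a hand-written test would take them as parameters. A trivial sketch of the difference:

    # Hard-coded, like the captured script above: the values 2 and 3 are frozen in.
    def captured():
        return 2 + 3

    # Parameterized: the same logic, reusable for any inputs.
    def add(a, b):
        return a + b

    print(captured(), add(2, 3))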

Capture utilities can help you script tests by showing you how the test tool interprets a manual test case. They are not useless. But they are dangerous if you try to do too much with them.

Programming test cases on an ad hoc basis: Test groups often try to create automated test cases in their spare time. The overall plan seems to be, "Create as many tests as possible." There is no unifying plan or theme. Each test case is designed and coded independently, and the scripts often repeat exact sequences of commands. This approach is just as fragile as the capture/replay approach.

STRATEGIES FOR SUCCESS

We didn’t meet to bemoan the risks associated with using these tools. Some of us have done enough of

that on comp.software.testing and in other publications. We met because we realized that several labs had made significant progress in dealing with these problems. But information isn’t being shared enough. What seems obvious to one lab is advanced thinking to another. It was time to take stock of what we collectively knew, in an environment that made it easy to challenge and clarify each other’s ideas. Here are some suggestions for developing an automated regression test strategy that works:

1. Reset management expectations about the timing of benefits from automation.
2. Recognize that test automation development is software development.
3. Use a data-driven architecture.
4. Use a framework-based architecture.
5. Recognize staffing realities.
6. Consider using other types of automation.

Reset management expectations about the timing of benefits from automation.

We all agreed that when GUI-level regression automation is developed in Release N of the software, most of the benefits are realized during the testing and development of Release N+1. I think that we were surprised to realize that we all shared this conclusion, because we are so used to hearing about (if not experiencing) the oh-so-fast time to payback for an investment in test automation. Some benefits are realized in release N. For example:

• There’s a big payoff in automating a suite of acceptance-into-testing (also called "smoke") tests. You might run these 50 or 100 times during development of Release N. Even if it takes 10x as long to develop each test as to execute each test by hand, and another 10x cost for maintenance, this still creates a time saving equivalent to 30-80 manual executions of each test case.

• You can save time, reduce human error, and obtain good tracking of what was done by automating configuration and compatibility testing. In these cases, you are running the same tests against many devices or under many environments. If you test the program’s compatibility with 30 printers, you might recover the cost of automating this test in less than a week.


• Regression automation facilitates performance benchmarking across operating systems and across different development versions of the same program.

Take advantage of opportunities for near-term payback from automation, but be cautious when automating with the goal of short-term gains. Cost-justify each additional test case, or group of test cases. If you are looking for longer-term gains, across releases of the software, then you should think seriously about setting your goals for Version N as:

• providing efficient regression testing for Version N in a few specific areas (such as smoke tests and compatibility tests);

• developing scaffolding that will make for broader and more efficient automated testing in Version N+1.

Recognize that test automation development is software development.

You can’t develop test suites that will survive and be useful in the next release without clear and realistic planning. You can’t develop extensive test suites (which might have more lines of code than the application being tested) without clear and realistic planning. You can’t develop many test suites that will have a low enough maintenance cost to justify their existence over the life of the project without clear and realistic planning. Automation of software testing is just like all of the other automation efforts that software developers engage in—except that this time, the testers are writing the automation code.

• It is code, even if the programming language is funky.
• Within an application dedicated to testing a program, every test case is a feature.
• From the viewpoint of the automated test application, every aspect of the underlying application (the one you're testing) is data.

As we've learned on so many other software development projects, software developers (in this case, the testers) must:

• understand the requirements;
• adopt an architecture that allows us to efficiently develop, integrate, and maintain our features and data;
• adopt and live with standards. (I don't mean grand schemes like ISO 9000 or CMM. I mean that it makes sense for two programmers working on the same project to use the same naming conventions, the same structure for documenting their modules, the same approach to error handling, etc. Within any group of programmers, agreements to follow the same rules are agreements on standards);
• be disciplined. Of all people, testers must realize just how important it is to follow a disciplined approach to software development instead of using quick-and-dirty design and implementation. Without it, we should be prepared to fail as miserably as so many of the applications we have tested.

Use a data-driven architecture. [6]

In discussing successful projects, we saw two classes of approaches, data-driven design and framework-based design. These can be followed independently, or they can work well together as an integrated approach. A data-driven example: Imagine testing a program that lets the user create and print tables. Here are some of the things you can manipulate:

• The table caption. It can vary in typeface, size, and style (italics, bold, small caps, or normal).
• The caption location (above, below, or beside the table) and orientation (letters are horizontal or vertical).
• A caption graphic (above, below, or beside the caption), and graphic size (large, medium, small). It can be a bitmap (PCX, BMP, TIFF) or a vector graphic (CGM, WMF).


• The thickness of the lines of the table's bounding box.
• The number and sizes of the table's rows and columns.
• The typeface, size, and style of text in each cell. The size, placement, and rotation of graphics in each cell.
• The paper size and orientation of the printout of the table.

Figure 1: Some characteristics of a table (graphic not reproduced; it shows a sample table with its caption, a large caption graphic, a tall row, a short row, and the bounding box).

These parameters are related because they operate on the same page at the same time. If the rows are too big, there’s no room for the graphic. If there are too many typefaces, the program might run out of memory. This example cries out for testing the variables in combination, but there are millions of combinations. Imagine writing 100 scripts to test a mere 100 of these combinations. If one element of the interface should change—for example, if Caption Typeface moves from one dialog box to another—then you might have to revise each script.

Row  Caption Location  Caption Typeface  Caption Style  Caption Graphic (CG)  CG Format  CG Size  Bounding Box Width
1    Top               Times             Normal         Yes                   PCX        Large    3 pt
2    Right             Arial             Italic         No                    -          -        2 pt
3    Left              Courier           Bold           No                    -          -        1 pt
4    Bottom            Helvetica         Bold Italic    Yes                   TIFF       Medium   none

Figure 2: The first few rows of a test matrix for a table formatter

Now imagine working from a test matrix. A test case is specified by a combination of the values of the

many parameters. In the matrix, each row specifies a test case and each column is a parameter setting. For example, Column 1 might specify the Caption Location, Column 2 the Caption Typeface, and Column 3 the Caption Style. There are a few dozen columns. Create your matrix using a spreadsheet, such as Excel. To execute these test cases, write a script that reads the spreadsheet, one row (test case) at a time, and executes mini-scripts to set each parameter as specified in the spreadsheet. Suppose that we're working on Row 2 in Figure 2's matrix. The first mini-script would read the value in the first column (Caption Location), navigate to the appropriate dialog box and entry field, and set the Caption Location to Right, the value specified in the matrix. Once all the parameters have been set, you do the rest of the test. In this case, you would print the table and evaluate the output. The test program will execute the same mini-scripts for each row.


In other words, your structure is:

    Load the test case matrix
    For each row I (row = test case)
        For each column J (column = parameter)
            Execute the mini-script for this parameter:
                Navigate to the appropriate dialog box or menu
                Set the parameter to the value specified in test item (I, J)
        Run test I and evaluate the results

If the program's design changed, and the Caption Location was moved to a different dialog, you'd only have to change a few lines of code, in the one mini-script that handles the caption location. You would only have to change these lines once: this change will carry over to every test case in the spreadsheet. This separation of code from data is tremendously efficient compared to modifying the script for each test case.

There are several other ways of setting up a data-driven approach. For example, Bret Pettichord (one of the LAWST participants) fills his spreadsheet with lists of commands. [7] Each row lists the sequence of commands required to execute a test (one cell per command). If the user interface changes in a way that changes a command sequence, the tester can fix the affected test cases by modifying the spreadsheet rather than by rewriting code. Other testers use sequences of simple test cases or of machine states. Another way to drive testing with data uses previously created documents. Imagine testing a word processor by feeding it a thousand documents. For each document, the script makes the word processor load the document and perform a sequence of simple actions (such as printing). A well-designed data-driven approach can make it easier for non-programming test planners to specify their test cases, because they can simply write them into the matrix. Another by-product of this approach, if you do it well, is a set of tables that concisely show what test cases are being run by the automation tool.
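A minimal sketch of that structure in Python, with a few rows of the Figure 2 matrix inlined for illustration; in practice the rows would come from the exported spreadsheet (for example via csv.DictReader), and set_parameter would dispatch to the per-parameter mini-scripts:

    # A few rows of the matrix, inline for the sake of the sketch;
    # in practice: rows = csv.DictReader(open("table_tests.csv", newline=""))
    rows = [
        {"Caption Location": "Top",   "Caption Typeface": "Times",   "Caption Style": "Normal"},
        {"Caption Location": "Right", "Caption Typeface": "Arial",   "Caption Style": "Italic"},
        {"Caption Location": "Left",  "Caption Typeface": "Courier", "Caption Style": "Bold"},
    ]

    def set_parameter(column, value):
        # Hypothetical mini-script: navigate to the right dialog/field and set one parameter.
        print(f"set {column} = {value}")

    for row in rows:                        # one row = one test case
        for column, value in row.items():   # one column = one parameter
            set_parameter(column, value)
        print("-- print the table and evaluate the output --")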

Use a framework-based architecture.

The framework provides an entirely different approach, although it is often used in conjunction with one or more data-driven testing strategies. Tom Arnold (one of the LAWST participants) discusses this approach in his book [8] and courses. The framework isolates the application under test from the test scripts by providing a set of functions in a shared function library. The test script writers treat these functions as if they were basic commands of the test tool's programming language. They can thus program the scripts independently of the user interface of the software.

For example, a framework writer might create the function openfile(p). This function opens file p. It might operate by pulling down the File menu, selecting the Open command, copying the file name to the file name field, and selecting the OK button to close the dialog and do the operation. Or the function might be richer than this, adding extensive error handling. The function might check whether file p was actually opened, or it might log the attempt to open the file and log the result. The function might pull up the File Open dialog by using a command shortcut instead of navigating through the menu. If the program that you're testing comes with an application programmer interface (API) or a macro language, perhaps the function can call a single command and send it the file name and path as parameters. The function's definition might change from week to week. The scriptwriter doesn't care, as long as openfile(x) opens file x.

Many functions in your library will be useful in several applications (or they will be if you design them to be portable). Don't expect 100% portability. For example, one version of openfile() might work for every application that uses the standard File Open dialog, but you may need additional versions for programs that customize the dialog. Frameworks include several types of functions, from very simple wrappers around simple application or tool functions to very complex scripts that handle an integrated task. Here are some of the basic types:


a. Define every feature of the application.

You can write functions to select a menu choice, pull up a dialog, set a value for a variable, or issue a command. If the UI changes how one of these works, you change how the function works. Any script that was written using this function changes automatically when you recompile or relink.

Frameworks are essential when dealing with custom controls, such as owner-draw controls. An owner-draw control uses programmer-supplied graphics commands to draw a dialog. The test-automation tool will know that there is a window here, but it won't know what's inside. How do you use the tool to press a button in a dialog when it doesn't know that the button is there? How do you use the tool to select an item from a listbox, when it doesn't know the listbox is there? Maybe you can use some trick to select the third item in a list, but how do you select an item that might appear in any position in a variable-length list? Next problem: how do you deal consistently with these invisible buttons and listboxes and other UI elements when you change video resolution?

At the LAWST meeting, we talked of kludges upon kludges to deal with issues like these. Some participants estimated that they spent half of their automation development time working around the problems created by custom controls. These kludges are a complex, high-maintenance, aggravating set of distractions for the script writer. I call them distractions because they are problems with the tool, not with the underlying program that you are testing. They focus the tester on the weaknesses of the tool, rather than on finding and reporting the weaknesses of the underlying program.

If you must contend with owner-draw controls, encapsulating every feature of the application is probably your most urgent large task in building a framework. This hides each kludge inside a function. To use a feature, the programmer calls the feature, without thinking about the kludge. If the UI changes, the kludge can be redone without affecting a single script.

b. Define commands or features of the tool’s programming language.

The automation tool comes with a scripting language. You might find it surprisingly handy to add a layer of indirection by putting a wrapper around each command. A wrapper is a routine that is created around another function. It is very simple, probably doing nothing more than calling the wrapped function. You can modify a wrapper to add or replace functionality, to avoid a bug in the test tool, or to take advantage of an update to the scripting language. Tom Arnold [9] gives the example of wMenuSelect, a Visual Test function that selects a menu. He writes a wrapper function, SelMenu(), that simply calls wMenuSelect. This provides flexibility. For example, you can modify SelMenu() by adding a logging function or an error handler or a call to a memory monitor or whatever you want. When you do this, every script gains the new capability without the need for additional coding. This can be very useful for stress testing, test execution analysis, bug analysis and reporting, and debugging purposes. LAWST participants who had used this approach said that it had repeatedly paid for itself.
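A rough sketch of the wrapper idea in Python terms; the wMenuSelect stub below only stands in for the real Visual Test command, and the added logging is just one example of what a wrapper can contribute:

    import time

    def wMenuSelect(menu_path: str):
        """Stub standing in for the test tool's own menu-selection command."""
        print("tool selects menu:", menu_path)

    def SelMenu(menu_path: str):
        """Wrapper: same behaviour as the tool command, plus whatever extras we want."""
        start = time.time()
        wMenuSelect(menu_path)                                                    # the wrapped call itself
        print(f"log: selected {menu_path!r} in {time.time() - start:.3f}s")       # added logging

    SelMenu("File\\Open")   # every existing script gains the logging without any extra coding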

c. Define small, conceptually unified tasks that are done frequently. The openfile() function is an example of this type of function. The scriptwriter will write hundreds of scripts that require the opening of a file, but will only consciously care about how the file is being opened in a few of those scripts. For the rest, she just wants the file opened in a fast, reliable way so that she can get on with the real goal of her test. Adding a library function to do this will save the scriptwriter time, and improve the maintainability of the scripts. This is straightforward code re-use, which is just as desirable in test automation as in any other software development.
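A sketch of what such a task-level function might look like; the ui helper object and its method names are invented, standing in for whatever navigation commands your tool actually provides:

    def openfile(path, ui):
        """Open file 'path' through the UI; log the attempt and report the outcome."""
        ui.select_menu("File\\Open")       # or drive a command shortcut or application API instead
        ui.type_into("File name", path)
        ui.click("OK")
        opened = ui.window_exists(path)    # did the document window actually appear?
        print(f"log: openfile({path!r}) -> {'ok' if opened else 'FAILED'}")
        return opened

Script writers then simply call something like openfile("report.doc", ui) without caring whether the function navigates the menu, uses a shortcut, or calls an application API.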


d. Define larger, complex chunks of test cases that are used in several test cases. It may be desirable to encapsulate larger sequences of commands. However, there are risks in this, especially if you overdo it. A very complex sequence probably won’t be re-used in many test scripts, so it might not be worth the labor required to generalize it, document it, and insert the error-checking code into it that you would expect of a competently written library function. Also, the more complex the sequence, the more likely it is to need maintenance when the UI changes. A group of rarely-used complex commands might dominate your library’s maintenance costs.

e. Define utility functions. For example, you might create a function that logs test results to disk in a standardized way. You might create a coding standard that says that every test case ends with a call to this function. Each of the tools provides its own set of pre-built utility functions. You might or might not need many additional functions.
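For instance, a results-logging utility might look something like the sketch below (the file name and format are made up; each tool also ships its own logging functions):

    import datetime

    def log_result(test_name: str, passed: bool, details: str = "", logfile: str = "results.log"):
        """Append one standardized result line per test case."""
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        status = "PASS" if passed else "FAIL"
        with open(logfile, "a") as f:
            f.write(f"{stamp}\t{test_name}\t{status}\t{details}\n")

    log_result("table_caption_top", True)   # by convention, every test case ends with this call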

Some framework risks

You can't build all of these commands into your library at the same time. You don't have a big enough staff. Several automation projects have failed miserably because the testing staff tried to create the ultimate, gotta-have-everything programming library. Management support (and some people's jobs) ran out before the framework was completed and useful. You have to prioritize. You have to build your library over time.

Don't assume that everyone will use the function library just because it's there. Some people code in different styles from each other. If you don't have programming standards that cover variable naming, order of parameters in function interfaces, use of global variables, etc., then what seems reasonable to one person will seem unacceptable to another. Also, some people hate to use code they didn't write. Others come onto a project late and don't know what's in the library. Working in a big rush, they start programming without spending any time with the library. You have to manage the use of the library.

Finally, be careful about setting expectations, especially if your programmers write their own custom controls. In Release 1.0 (or in the first release that you start automating tests), you will probably spend most of your available time creating a framework that encapsulates all the crazy workarounds that you have to write just to press buttons, select list items, select tabs, and so on. The payoff from this work will show up in scripts that you finally have time to write in Release 2.0. Framework creation is expensive. Set realistic expectations or update your resume.

Recognize staffing realities.

You must educate your management about several staffing issues.

First, many testers are relatively junior programmers. They don't have experience designing systems. Poorly designed frameworks can kill the project too. So can overly ambitious ones. To automate successfully, you may have to add more senior programmers to the testing group.

Second, many excellent black box testers have no programming experience. They provide subject matter expertise or other experience with customer or communications issues that most programmers can't provide. They are indispensable to a strong testing effort. But you can't expect these people to write automation code. Therefore, you need a staffing and development strategy that doesn't require everyone to write test code. You also want to avoid creating a pecking order that places tester-programmers above tester-non-programmers. This is a common, and in my view irrational and anti-productive, bias in test groups that use automation tools. It will drive out your senior non-programming testers, and it will cost you much of your ability to test the program against actual customer requirements. Non-programmers can be well served by data-driven approaches that let them develop test cases simply by entering test planning ideas into a spreadsheet.


Third, be cautious about using contractors to implement automation. Develop these skills in-house, using contractors as trainers or to do the more routine work.

Finally, you must educate management that it takes time to automate, and you don't gain back much of that time during the Release in which you do the initial automation programming. If you are going to achieve your usual level of testing, you have to add staff. If a project would normally take ten testers one year for manual testing, and you are going to add two programmer-years of automation work, you will have to keep the ten testers and add two programmers. In the next Release, you might be able to cut down on tester time. In this Release, you'll save some time on some tasks (such as configuration testing) but you'll lose some time on additional training and administrative overhead. By the very end of the project, you might have improved your ability to quickly regression test the program in the face of late fixes, but at this last-minute point in the schedule, this probably helps you test less inadequately, rather than giving you an opportunity to cut staffing expense.

Consider using other types of automation

The LAWST meeting focused on GUI-level regression tools and so I have focused on them in this article. Near the start of the LAWST meeting, we each described our experiences with test automation. Several of us had dramatic successes to report, but most of the biggest successes involved extensive collaboration with programmers who were writing the application under test. The types of tools used in these success stories varied widely, reflecting the many different kinds of benefits available from many different kinds of testing tools.

There is too much hype, mythology, and wishful thinking surrounding GUI-based regression automation. These tools can create an illusion of testing coverage where no significant coverage exists, they can cause serious staff turnover, and they can focus your most skilled staff on designing and maintaining test cases that yield relatively few bugs. These tools can be genuinely useful, but they require a significant investment, careful planning, trained staff, and great caution.

STEP APPROACH TO TEST TOOL EVALUATION

Compatibility Issues:

• The tool has to be compatible with the operating system(s) your application supports.
• The tool has to be compatible with the development environment(s) used to create your application (e.g. MS, Java).
• The tool has to be compatible with third-party software with which your application integrates.
• While testing a database application, the tool should be able to retrieve data from, and possibly insert data into, the database without using the user interface.
• The tool should handle "custom controls" in your application, in cases where some GUI "custom controls" are created instead of using standard Windows controls.

Budget Constraints

• Has the tool budget taken training and implementation costs, as well as licensing costs, into account?
• The budget should take into account additional machines on which to run the scripts (if the tool will be used for unattended test execution).

Business Requirements

• Follow company requirements that vendors must meet in order to become "approved suppliers."
• Follow Vendor Evaluation procedures as described in the Purchase Process.


FEATURES THAT ARE IMPORTANT IN ANY GOOD TOOL

Scripting language

• The tool must have a scripting language of some kind that contains the usual programmatic constructs.

• Enable us to edit recorded scripts.
• Support variables and data types.
• Support arrays, lists, structures, or other compound data types.
• Support conditional logic (IF and CASE statements).
• Support loops (FOR, WHILE).
• Enable us to create and call functions.
• Should be able to run scripts in batch mode (unattended execution).
• Compiled modules – a library of functions.

UI element identifiers

• Identify the UI elements in a variety of representative windows (controls, objects and bitmaps).
• Handle any element identified to test all its functionalities and attributes.

Reusable libraries

• Should be able to create a function or subroutine that performs a common operation such as a search (so that while searching for records in the database, if the sequence of steps changes slightly, there is no need to update every script).
• Scripts we create with the tool should easily call the functions we put in the library.
• The functions should take parameters. E.g.: if you create a login function, you'll want to specify the user name and password at the time that you call the function, rather than embedding that information in the function itself (a minimal sketch follows this list).
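A minimal sketch of such a parameterized library function; the ui helper and its methods are placeholders for whatever the tool actually provides:

    def login(ui, user_name: str, password: str):
        """Reusable library function: credentials are parameters, not embedded constants."""
        ui.type_into("User Name", user_name)
        ui.type_into("Password", password)
        ui.click("Log In")

    # A script then calls it with whatever account the test needs, e.g.:
    # login(ui, "test_admin", "s3cret")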

Outside libraries

• In addition to creating our own libraries, the tool should be able to access outside libraries.
• It should be able to call into .dll files.
• It should be able to check that a changed value was actually written to the database, not just changed on the screen.
• It should be able to check whether a transaction was correctly and completely logged, even if the UI gives no access to the log.
• In general, these tests should determine “pass” or “fail” more accurately than checking the value through the user interface alone.
• If testing on a Windows system, we will also want access to the Windows API. The Windows API enables us to get system information that would be difficult or impossible to obtain in any other way. (For example, it is very useful to be able to get or set the value of a registry key from within the automated scripts.) A sketch of calling into an outside library is shown below.
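A minimal sketch of calling into an outside library from a Basic-style test script. GetTickCount is a real kernel32 function; the VB-style Declare line is an assumption about how a given tool's dialect exposes .dll calls:

' Declare a Windows API function exported by kernel32.dll (declaration form assumed).
Declare Function GetTickCount Lib "kernel32" () As Long

Sub Main
    Dim ms As Long
    ms = GetTickCount()                            ' milliseconds since the system started
    SQALogMessage sqaNone, "System uptime (ms): " & ms, ""
End Sub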


Abstract layers

• A “GUI map” abstraction layer makes maintaining our tests easier.
• It should let us update a changed UI identifier in one place, instead of changing all scripts when a change is made in the underlying identifiers.

Distributed tests

• Should have distributed test capability – While testing multi-user software, we should be able to create tests that involve multiple simulated users.

• Should enable us to specify the machine on which to execute a given command, in cases where one user on one machine locks a file while another user on another machine tries to open the same file.

• Besides this, it should also have the ability to launch a test on a remote machine, where the remote machine executes that test from beginning to end.

• In other words, the tool should be able to create a test that waits for an action (such as locking a file) to complete on the first machine before beginning an action (such as attempting to open the file) on the second machine.

File I/O

• The tool should provide functions that enable you to open a file on the hard disk (usually an ASCII file) programmatically, read from it, write to it, and close it. This is central to “data-driven test automation.”

• With this, a script can use test data from a file to drive the test activity; data-driven testing should make it possible to automate a large number of tests with a minimal amount of test automation code. (A sketch appears after this list.)

• The tool should ideally provide a data-driven wizard for testing similar scenarios with multiple data values, so that by supplying a range of data it can carry out the tests.

• It should provide functions for handling Windows .ini files.

(E.g.: if the software under test needs to know which server to use, then it is a good idea to specify the server name in an .ini file. Then you can change the test server without having to change the automated scripts.)
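A minimal data-driven sketch using standard Basic file I/O, as referred to above; the file path, its one-value-per-line layout, and the SearchFor routine are hypothetical:

Sub Main
    Dim record As String

    ' Each line of the ASCII file holds one test input (hypothetical path and layout).
    Open "c:\testdata\searches.txt" For Input As #1
    While Not EOF(1)
        Line Input #1, record
        SearchFor record        ' hypothetical library routine that drives the search in the UI
    Wend
    Close #1
End Sub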

Database testing

• Using database checkpoints, the tool should be able to handle database tests and server-level tests.

Error handling

• Should have a good error-handling system that makes it possible for other scripts to execute even after one script fails (and should continue even if an error dialog box appears).

• In such a case, the tool could stop the failed script, then reset the software to its initial state before starting the next script.

• It is preferred if the error-handling capability of the tool is customizable.

• Should also support exception handling – application exceptions, TSL exceptions, and object exceptions.


E.g.: Say that the product has known error conditions that require a certain amount of cleanup to fix. The automated tests will be even more robust if you can extend the error handling system so that it recognizes these errors and performs the required cleanup.
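A sketch of one way such a recovery wrapper might look in a Basic-style driver script, assuming the CallScript command described later in this chapter, a hypothetical CleanUp routine, and that a failing called script raises a trappable run-time error (this last point is tool-dependent):

Sub Main
    On Error Goto Recover                      ' trap failures raised while the script runs
    CallScript "OrderEntrySmokeTest"           ' hypothetical script name
    Exit Sub

Recover:
    SQALogMessage sqaFail, "OrderEntrySmokeTest failed - cleaning up", Error$
    CleanUp                                    ' hypothetical routine that resets the AUT
    Resume Next                                ' continue so the remaining scripts still run
End Sub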

The debugger

• A test that runs successfully on one machine should also run successfully on any other machine.

• The debugger should enable the user to find the problem much more quickly than a trial-and-error approach.

• The debugger should ideally be built into the test script development environment.

• The debugger should enable us to step through the script line by line, set “break points” and inspect the currently defined variables and their values.

• The debugger should enable the user to put a break point on any executable line, whether it is in the script under test or in the supporting code (in the reusable libraries, for example).

Source control

• The source control system should allow the user to check files into and out of a master repository, roll back to previous versions, find differences between versions, and track several projects simultaneously.

• These features make it possible for multiple people to work on multiple versions of source code files. Rather than looking for a test tool that includes source control features, it is actually best if we use the same source control system that the software developers use; the tool should have options for this.

• The practical advantage of using the same source control system is that we can take advantage of the fact that there is already an established way of working.

• If the tool expects all files to be handled in a single, centralized location, the vendor should suggest the best way to use source control with that centralized file location.

Command line script execution

• It should have the ability to run scripts from the command line, as this makes it easier to set up tests that reboot the machine and restart the tests automatically after the machine is restarted.
• This in turn should make it possible to automatically kick off automated tests after each build.

The user community

• Last but not least, choose a tool that has an established user community – discussion groups, users’ web sites, and local user groups.

• Members of such a user community often share libraries of common functions and other useful bits of source code and information. This can help in developing our own internal reusable libraries.


Types of Tests to Ideally support:

• Regression testing
• Coverage analysis
• Load test
• Stress test (periodicity and frequency)
• Interactive tests
• Functional test (when it needs to be repeated, and in particular situations)
• Sanity testing (always?)
• System testing (always?)
• Memory test
• Performance test
• Acceptance testing
• Cases to automate - when we want to do:

Types of Attributes to Support

• Speed
• Repeatability
• Coverage
• Reliability
• Reusability
• Programmability

Coverage

• Tests must be run for every build
• Multiple data values for the same actions (data-driven tests)
• Identical tests using different browsers
• Mission-critical pages
• Transactions on pages which don't change over the short term
• Load simulation


RATIONAL ROBOT:

What is Rational Robot?

Rational Robot is a tool that allows you to automate your test scripts. Automation is very useful for regression and performance testing. Regression testing involves re-running previously tested and passed functionality to assure that any new enhancement(s) has/have not corrupted these parts of the software system. Performance testing involves a set of performed tests that can allow the tester to evaluate ‘performance related characteristics of the target-of-test such as the timing profiles, execution flow, response times, and operational reliability and limits.’ Automation can increase the test coverage of your application, speed up the process of testing and make it more reliable. Repeatable, tedious and mundane actions can be scripted, allowing the tester to get involved in more creative and challenging tasks.

Rational Robot records in the SQABasic language; as the name suggests, this is a BASIC-derived language that is similar to VB. We should treat Robot as a programming environment, where we can manually edit scripts to include things like conditional statements, loops, custom data types, variables, etc. From the libraries in the SQABAS32 folder we can call sub-routines and functions. We should be looking to put all re-usable code in this folder.

We can also create datapools, which script (.rec) files can draw test data from. Datapools are a source of variable test data that can be used in a script. We can populate these through a custom Rational database or from Excel or Access. (We will deal with creating and using datapools in another document.)

Creating a script

It will be best to record the scripts through TestManager so we can tie them to requirements or other test inputs. However, if we wish to create a script in Robot we can do so by just hitting the GUI command button on the toolbar.

As you may well know, Robot records the mouse clicks and keystrokes of the user. Try to use keystrokes instead of mouse clicks, and try not to use the scroll-bar, as you will have difficulty on playback.

Name(s) of Test Scripts

We should name the scripts as much as possible after the requirements that will be input in TestManager. So if we have a requirement that states that ‘Tender Trust must be able to upload a tender’, that script would be named ‘UploadTenderTrust’, for example. We will talk more about scripting standards at another date.


Recording

During recording you will be shown a floatable toolbar that looks like this:

You are able to move this toolbar anywhere within the screen. The movement of the toolbar will not be recorded in the script.

Obviously you will be able to discern the pause and stop recording options. The other two are a little less intuitive.

One button opens the Robot window while recording. I wouldn't bother about this too much: if I want to manually enter something in the script while recording, I normally pause, then type in the code.

Verification Points

You will use this option more than the stop button. It allows you to create verification points (VPs). A verification point is ‘a point in the script that the tester creates in order to confirm the state of an object across builds of the application under test.’ During recording, a verification point captures object information (based on the type of verification point) and stores it in a baseline data file. The information in this file becomes the baseline of the expected state of the object during subsequent builds.

When we play back on a new build, the state of the object is compared against the baselined verification point. If the captured data does not match the baseline, Robot creates an actual data file. The information in this file shows the actual state of the object in that build.

The results of the verification are now displayed in TestManager in the TestLogViewer. A green flag shows us that the VP has passed, while a red flag indicates that it has failed. If the VP has failed we can double-click on the flag and we will be shown where it failed (i.e. there is a comparison of the baselined data with the actual data.)


Types of Verification Points

The verification toolbar:

The verification toolbar allows you to choose different types of verification points.

• Alphanumeric: The alphanumeric test captures and tests alphanumeric data in windows objects that contain text, such as edit boxes. You can use the VP to verify that text has not changed, to catch spelling errors and to ensure that numeric values are accurate.

• Clipboard: Captures and compares alphanumeric data that has been copied to the clipboard

• File Comparison: A file comparison compares two specified files during playback. To insert a file comparison VP, click Insert > VP > File Comparison. When you create the VP, you specify the drive, directory and file name(s). You can use the browse facility. During playback, Robot compares the files byte-for-byte.

• File Existence: Verifies the existence of a specified file during playback. Click Insert > VP > File Existence. During playback Robot checks to see if the file exists in the specified location. You could use this, for example, to see if registry files have been added.

• Menu: The menu VP captures and compares the menu title, menu items, shortcut keys and the state of selected menus. Robot records information about the top menu and up to five levels of sub-menus. Robot treats menu items as objects within a menu and tests their content, state, and accelerator keys regardless of the menu item’s location.

• Object Data: Object Data captures and compares the data inside standard Windows objects.

• Object Properties: The Object Properties VP captures and compares the properties of standard Windows objects.

• Region and Window Image comparators - Avoid these like the plague. They are practically guaranteed to fail: they capture the screen as a bitmap (a pixel-by-pixel representation), and the result depends on your screen resolution, graphics card, colour palette, operating system, ad nauseam. I use window and region image comparators only conditionally:

Result = WindowVP (Exists, "Caption=TenderTrust - Microsoft Internet Explorer", "VP=Window Existence")
If Result = 0 Then
    Result = WindowVP (CompareImage, "Caption=TenderTrust - Microsoft Internet Explorer", "VP=Window Image")
End If

For example, if the Tender Trust window does not exist (note the window existence VP), I want to know what appeared in its place, so I capture the image.

• Web Site Compare: Captures a baseline of a Web site and compares it to the Web site at another point in time. (I have not actually used this, so give it a try.)

• Web Site Scan: Checks the contents of a Web site with every revision and ensures that changes have not resulted in defects.

CREATING A WAIT STATE

When you create a verification point, you can add specific wait values to handle time-dependent test activities. Wait values are useful when the application requires an unknown amount of time to complete a task. Using a wait value keeps the verification point from failing if the task is not completed immediately or if the data is not accessible right away.

We're in the middle of creating a window existence VP on the Robot window. Note that I have named the verification point so it gives an indication of what it is doing, and notice the postfix ‘WE’ (short for window existence). For Object Properties, use the postfix ‘OP’; for Object Data, ‘OD’; for Window Image, ‘WI’; and so on.


So in this dialog we have ticked the ‘Apply wait state to verification point’ checkbox and this enables the two edit boxes within the dialog. So in the example above, Robot will search for the existence of its own window; it will look every two seconds and after thirty seconds will forget all about it. We want the expected result to equal the actual, hence the radio button chosen.

The code in Robot will look like this:
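The screenshot of the generated code is not reproduced. A hedged sketch of roughly what the recorded line looks like; the VP name is illustrative, and the exact wait-state parameter string ('Wait=2,30', i.e. retry every two seconds for up to thirty seconds) is an assumption about what Robot generates:

Result = WindowVP (Exists, "Caption=Rational Robot", "VP=RobotWindowWE;Wait=2,30")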

We can change the wait states, by going into Robot and manually editing the figures in the code.

WORKING WITH THE DATA IN DATA GRIDS

When you create a Clipboard, Menu, or Object Data verification point and select an object, you are actually testing the object's data. This data appears in a Robot data grid, which shows data in rows and columns. You use the data grid to select and edit the data to test.

In the above example, I have done an Object Data VP on the ‘Find’ utility in Microsoft Outlook.

We can edit this data manually (see below), or we can use any of the following methods to select data in the columns, rows, or cells of the data grid. For a range of contiguous cells, click and drag the pointer over the cells. To select non-contiguous cells, press the CTRL key and click each cell (clicking without the CTRL key cancels previous selections). To select an entire column, click the column title. To test an entire row, click the row number; Robot compares the data and the number of items in the row. To test all cells, click the box in the upper-left corner of the grid.

To edit data in a grid double-click a cell in the data grid and edit the data in the cell. To accept the changes, press ENTER. To cancel the changes, press ESC.


SOME USEFUL SQA BASIC SCRIPTING COMMANDS

1.SQALogMessage

Writes a message to the test log viewer in TestManager. It is optional whether you set a flag; the flags that can be set are pass, fail, warning, or no flag at all. This is useful for notation. The syntax for this command is:

SQALogMessage code%, message$, description$

Syntax Element   Description
code%            One of:
                 sqaPass or True – Inserts Pass (green flag) in the Test Log Viewer Result column
                 sqaFail or False – Inserts Fail (red flag) in the Test Log Viewer Result column
                 sqaWarning – Inserts Warning (yellow flag) in the Test Log Viewer Result column
                 sqaNone – Leaves the Result column blank for the message entry
message$         The message to insert in the log. The message appears in the Event Type column of the log.
description$     A description of the log message. The description appears in the Failure Description field of the Log Event – Log Message dialog box.

The Log Event – Log Message dialog box is displayed by right-clicking the message in the log and clicking Properties.

Example: SQALogMessage sqaFail, "Test Bombed Big-Time", ""
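As a usage sketch, SQALogMessage is typically paired with the result of a check; the window caption and VP name below are illustrative, and the zero-means-failure convention follows the WindowVP example shown earlier:

Result = WindowVP (Exists, "Caption=TenderTrust - Microsoft Internet Explorer", "VP=MainWindowWE")
If Result = 0 Then
    SQALogMessage sqaFail, "Main window missing", "Window existence VP failed"
Else
    SQALogMessage sqaPass, "Main window appeared", ""
End If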


2.SQAGetProperty

Retrieves the value of the specified property. The syntax for this command is:

status% = SQAGetProperty(recMethod$, property$, value)

Syntax Element   Description
recMethod$       The recognition method values you use to identify an object; these depend on the object you are accessing. For example, if you are accessing a push button object, use the recognition method values listed for the PushButton user action command.
property$        A case-sensitive property name, e.g. “Text”.
value            An output argument of type Variant that will contain the retrieved property value.

Example: SQAGetProperty("Type=CheckBox;Text=Match case", "State", CheckState)

3.SQAWaitForObject

Pauses execution of the script until the specified object can be found. The syntax for this command is:

SQAWaitForObject (recMethod$, timeout&)

Syntax Element   Description
recMethod$       The recognition method values you use to identify an object; these depend on the object you are accessing.
timeout&         The maximum number of milliseconds to look for the object. If the object does not appear, Robot's search will end.

Example:
Result = SQAWaitForObject ("Type=PushButton;Text=OK", 120000)
If Result <> sqaSuccess Then
    MsgBox "Fail"
End If

4.SQAConsoleWrite

Writes the specified text to the Robot console window. The syntax for this command is:

SQAConsoleWrite text$

The console window is in the left-hand corner of Robot; choose the Console tab to see it. If we write ‘SQAConsoleWrite "Start of Playback"’ in the .rec file, the message appears in the console. The console can help you keep track of your script.

5.Callscript

Calls a script from another script and executes it from within the currently-running script.

The syntax is:

CallScript script$

Syntax Element   Description
script$          Name of the script to be called and executed

6.DelayFor

Delays execution of the script for a specified number of milliseconds.

The syntax is:

DelayFor TimeInterval&

Syntax Element   Description
TimeInterval&    Time in milliseconds to delay

Example: DelayFor 2000

7.StartBrowser

Starts an instance of the browser, enables Web testing, and loads a Web page if one is specified.

The syntax is: StartBrowser [URL$,] [WindowTag=Name$]

Syntax Element   Description
URL$             The Universal Resource Locator of the Web page to load.
Name$            An optional name that specifies this instance of the browser. In subsequent user actions, WindowTag=Name$ is used in the recMethod$ argument of the Window SetContext command to identify this instance of the browser.

An example would be: StartBrowser "http://www.rational.com/", "WindowTag=Instance1"


NOTE: You must use the StartBrowser command within Robot to enable Web object recognition. If you start a Web browser outside of Robot (that is, without using the StartBrowser command), you must open rbtstart.htm in your browser, or run the Rational ActiveX Test Control that rbtstart.htm references, before loading Web pages for testing.

8.StartApplication

Starts the specified application from within the currently running script. The syntax is:

StartApplication Pathname$

Syntax Element   Description
Pathname$        The full path and file name of the application to be started. Arguments can be included.

An example would be: StartApplication """c:\program files\rational\rational test\rtinspector.exe"""

USING WILDCARDS IN WINDOW CAPTIONS

A window caption is located in its title bar. Often, a window caption is used to help identify an object in a recognition method. When you specify a window caption in a recognition method, you can type the entire caption, or you can use the following wildcards:

Wildcard character   Description
Question mark (?)    Matches a single character in a caption
Asterisk (*)         Matches any number of caption characters from the asterisk to the next character or, if there are no characters after the asterisk, to the end of the caption.

When using wildcard characters in a caption, enclose the caption within braces.

Example: Window SetContext, “Caption = {?otepad}”, “”
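A second hedged example, this time using the asterisk wildcard; the caption text is illustrative:

Window SetContext, "Caption={TenderTrust*}", ""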

Editing, Compiling, and Debugging Scripts

You can edit the text of any open script. Before starting to edit, you must have a script open. The script can either be one that has been just recorded or a script that has just been opened.

Compiling a script

When you play back a GUI script, or when you debug a GUI script, Robot compiles the script if it has been modified since it last ran. You can also compile scripts manually, by pressing this toolbar button.


Locating Compilation Errors During compilation, the Build tab of the Output window displays compilation results and error messages with line numbers for all scripts and library source files.

Double-click the error or warning in the Build tab, to locate compilation errors in the Script window.

Debugging GUI Scripts Robot includes a complete, built-in debugging environment to assist you during the development phase of your GUI scripts. To debug a GUI script, use the Debug menu commands or toolbar buttons. The debug commands are:

• GO Plays back the currently open script. It executes the script until either the next breakpoint or the end of the script, whichever comes first.

• GO UNTIL CURSOR Plays back the currently open script, stopping at the text cursor position, the next breakpoint, or the end of the script, whichever comes first.

• ANIMATE Plays back the currently open script, displaying a yellow highlighted line as it executes. It executes until either the next breakpoint or the end of the script, whichever comes first.


• SET OR CLEAR BREAKPOINTS Sets or clears a breakpoint at the cursor position. If you set a breakpoint, Robot inserts a solid red circle in the left margin or highlights the line. If you clear a breakpoint, Robot removes the circle.

• STEP INTO Begins single-step execution, that is, it executes one command at a time.

• INSERT AT CURSOR We can continue recording anywhere in the script by clicking this button. Robot will record at the cursor position.

Setting and Clearing Breakpoints

Robot lets you set any number of breakpoints in a script. A breakpoint is a location in a script where you want execution to stop. When execution stops at a breakpoint, you can examine the value of a variable or check the state of an object before it is modified by a subsequent command. You can then resume execution until the next breakpoint or the end of the script. To use a breakpoint, set it on a line in an open script with the Set Or Clear Breakpoints command described above, then click Debug > Go to run to it.

Examining Variable Values

You can examine variable and constant values in the Variables window as you play back scripts during debugging. The Variables window appears in the lower-right corner of the Robot main window. If the Variables window is not open, click View > Variables to open it. Note that the Variables window only shows the values assigned to variables during a debug run.

This puts the caption of Acrobat Reader into the variable ‘value’ (a bit of a pointless exercise, because we already have the caption property), but it demonstrates that the value of the variable appears in the bottom right-hand corner of Robot.
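The script line from that screenshot is not reproduced; a hedged reconstruction of what it might look like, using the SQAGetProperty command described earlier in this document (the recognition string and window caption are assumptions):

Dim value As String
Dim Result As Integer
Result = SQAGetProperty("Type=Window;Caption=Adobe Acrobat Reader", "Caption", value)
' value now holds the window caption and shows up in the Variables window while debugging.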


Library Source Files

Library source files allow us to create custom sub-routines and functions that we can call in our .rec files. Rational ships with a number of functions that we can use, held in the ‘SqaUtil’ folder. Most Rational users will use these ready-made functions on a daily basis: for example, SQAGetProperty allows us to retrieve the property of an object, which we can store in a variable.

In the .rec file we would write:

Dim Result As Integer
Dim value As String

Result = SQAGetProperty ("Name=myObject", "Text", value)

So the text value will be held in the variable ‘value’ and we could compare it with another value, for example. The result indicates success or failure, so if for some reason the text property is not picked up by Robot we will be returned a nought ‘0’. The SQA Language Reference is full of Robot internal functions. The manual comes with TestStudio or it can be accessed on-line.

I- SBL/H’s

We cannot automate a project with record and playback alone. For example, suppose a developer has been given a requirement to add a dialog box to a PC application, and we have 30 recorded scripts that deal with that area. We would have to go into 30 .rec files and copy and paste the code in. What if there were 100 scripts, or a 1000? The time wasted could be quite substantial; not only that, it is de-motivating, and regression testing is never really done. If however we use the libraries, we can call a procedure into a .rec file and change the code in only one place.

If I can quote from the user guide: ‘Header files have an .SBH extension and contain the procedure declarations and global variables referred to in your test procedure script files. Source code files have .SBL extensions and contain the procedure definitions you use in your script files. Header and source files are stored in *\SQABAS32…’

II- How do I create an SBL?

Go into Robot and choose File > New > SQABasic File. You will be presented with the below dialog box.

Double click on Library Source file and write your code here:


So for example, the simple function above uses the internal date function, formats it, puts it into upper case, and stores the result in CurrentDate. Now go into the .SBH file, using the same method: File > New > SQABasic File, but select the Header File option in the ‘New SQABasic file’ dialog box. Declare the function as below.

We need to reference the header in the .rec file. We do this by using the ‘$Include metacommand.
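The screenshots are not reproduced here; below is a hedged reconstruction of the three pieces involved. The file names and the body of CurrentDate are assumptions, and the exact Declare and '$Include forms should be checked against the SQA Language Reference:

' datelib.sbl - library source file (hypothetical name)
Function CurrentDate() As String
    ' Take the internal date and upper-case it (the formatting step is simplified).
    CurrentDate = UCase$(Date$)
End Function

' datelib.sbh - header file declaring the routine (declaration form assumed)
Declare Function CurrentDate BasicLib "datelib" () As String

' In the .rec file, pull in the header with the '$Include metacommand and call the function:
'$Include "datelib.sbh"
Sub Main
    MsgBox CurrentDate()
End Sub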


We can now call the function in a procedure; here we have created a msgbox to view the result, and we get today's date.


RATIONAL TEST MANAGER

What is Rational Test Manager?

For QA purposes, Test Manager allows us to define, plan and co-ordinate tests. It also tells us the state of our testing effort and informs us of the quality of the software under test. It is heavily integrated with Rational Robot, which allows test automation. Script playback is executed by virtual testers.

What is a Virtual Tester?

A virtual tester is an instance of an automated test script running on a computer. For functional tests, only one virtual tester at a time can run on a computer. For performance tests, many virtual testers can run on a computer simultaneously.

Planning Tests

Test Inputs

The first step in planning any testing effort is to identify the test inputs. A test input ‘is anything that tests depend on or anything that needs validation’, for example requirements. Test inputs help you decide what you need to test. TestManager can pull in test input types from Requisite Pro.

Test Plans, Test Folders and Test Cases

When you have identified your test inputs, you can use TestManager to create a test plan. The test plan contains information about the purpose and goals of testing within the project, and the strategies used to execute and implement those tests. Projects can contain multiple test plans; you may have a test plan for each phase of testing. Within a test plan, you can create test case folders. Test case folders allow you to organise test cases logically and hierarchically. Within a test case folder is a test case. A test case is ‘a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement’. You develop test cases to define the individual things you need to validate to ensure that the system is working the way it is supposed to work. We use iterations to specify when a test case must pass. We use configurations to tell us what hardware and software the test case must run on, e.g. which operating system.


Building a Test Hierarchy in Test Manager

1.The Test Plan

To create a Test Plan right-click on the Test Plan folder in the Test Asset workspace and choose New Test Plan. You will receive this dialog box:

Enter the name of your test plan in the ‘Name’ edit box.

If the test plan has any documents relating to it, choose the external documents tab.


If you select ‘Add’ you will be able to browse to any document on your machine or on the network. This is useful if you want all documents in one place.

When a test-plan is created, the below window will appear. You will also receive this window if you open a test-plan.

2. The Test Case Folder

To create a test-case folder, right-click on the test plan in the test plan window and choose Insert Test Case Folder.


3. Test Cases

In the hierarchy test cases are below test case folders.

You create a test case by right-clicking on a test case folder and choosing Insert Test Case. This dialog should appear:

Give a name to your test case and click the ‘Design’ command button to open the design editor.

We will talk about the design editor later in the document.


Iterations and Test Assets

An iteration ‘is a distinct sequence of activities with a baselined plan and evaluation criteria resulting in a release (internal or external).’

Many test organisations plan more tests cases than they can actually execute at a given time. You can create all the test cases in TestManager, and then use iterations to identify which test cases actually need to run and when they need to pass.

Multiple iterations can be associated with a test case. The iterations indicate when a test case must pass. In many organisations a tester works with developers to determine at which iterations the test case needs to pass.

5.1 Associating Test Assets

In the Test Plan window, right click a test case and click associate iteration. You can follow similar steps to associate iterations within a test plan or test case folder. When you associate iterations with an object, all new objects that are direct children of the object inherit the iteration.

5.1.1 Test Input Association

In the test plan window, right click a test case and click Associate Test Input. The Test Inputs appear.

When you select the test input, it becomes associated with that test case.

Designing Tests

To open the Test Design window, just right click on a created test-case and choose ‘Design’.

Use the design editor to include all of the steps and all the verification points that should be included in the test script. A step is indicated by a footprint icon, while a verification point is indicated by a tick in the Type column and blue font characters. Just mouse-click in the Type grid to change from step to verification point and vice-versa. To add a new row, use the downward-pointing cursor key. To extend the capacity of the row, press Enter.


You should be able to design your tests based on test inputs. If you are using Robot, you should be able to start the tool and follow the steps documented in the test case's design to create a test script. The test steps and verification point grid is printable. Look at the test design to see what test steps we are re-using, so we can create a procedure in the library.

Implementing Tests

After you've created the test design for each test case, you can implement the test case. You implement a test case by building a test script and then associating that test script with the test case. You associate a test case with an implementation by going into the Test Plan window, right-clicking on the test case and selecting Properties. You will be faced with this dialog:

Select the implementation tab and depress the ‘Select’ command button in ‘Automated implementation’ frame. For the time being choose ‘GUI – (Rational Test Datastore)’. You will receive this window:


Select the test script(s) you want to associate with the test case and press ‘OK’.

If we want to run this test case and the script(s) associated with it right click in the test-plan window and choose run. You will be presented with the ‘Run Test Cases’ window:

We can run the test case and associated script(s) by clicking ‘OK’.


Note:

When we run a script or test-case implementation we may be presented with a dialog akin to the one below:

We will need to change the build number to the actual build we will be working on. So if we are working on build ‘2.2.0’, we should enter this in the build combo box by clicking on the command button.

In the log-folder, enter the date that the test is run, so we’ll have a dialog looking like the one below,

To execute a single script from Test Manager, go to File > Run Test Script and choose the type of script you wish to run.

Creating Suites

A suite can inform us of many things about the tests that will be run. A suite can contain information such as: user or computer groups, the number of virtual testers in each group, which test scripts the group(s) run, and how many times each test script runs. To create a suite you can use the dialog suite wizard. You must have test scripts or test cases available for use to create a suite.

To create a new suite, go to File > New Suite. You will be presented with this dialog:


For the moment choose the ‘Functional Testing Wizard’ radio button and press OK. This will lead you to this dialog.

Press Select and you will be presented with the Test Case Selection dialog:


Choose the test-cases you wish to add to the Test Suite and press next.

Now you must add the test scripts. Press the ‘Select’ command button to select the scripts you wish to add.


Select scripts

And they will appear in the succeeding screen

When you finish the associations, you will be presented with this screen.


In this window you can also edit scripts by selecting a script and right-clicking, choose ‘Open Test Script.’

Opening Suites

To open a suite, go to File > Open Suite. You will be presented with the below window.


And just select the suite you want to open.

You can insert a test asset by right clicking on a test asset in the suite window and choosing insert.

We know what a test case, a test script and a suite are.

A delay tells Test Manager how long to pause before it runs the next item in the suite. I would suggest putting a delay between all the scripts in the suite, because they have a tendency to fall over each other. To insert a delay, choose Delay from the sub-menu (above) and enter the number of seconds you want the delay to last.

A scenario is used to group test scripts together and run them on different computers; this is mostly used in performance tests where you profile your application. For example, on a system like Amazon.com you have different types of users: one types the ISBN, hits the order button and is finished; the next searches for keywords; another might just browse the site. In a scenario you would have scripts for each of them and run them with different weights, for example 20% User1, 50% User2 and 30% User3.

A selector allows us to define the items that Robot executes, and the sequence in which the items are executed. For the moment leave the sequential default.

A synchronization point allows the tester to coordinate the activities of a number of virtual testers by pausing the execution of each virtual tester at a particular point. The Transactor defines the number of user-defined tasks that each virtual tester runs in a given time period. For example, you might be testing a Web server, and you want the server to be able to support 100 hits per minute. To model this time-based behavior you use a transactor.


Executing Tests

The activity of running your tests is primarily running the implementations of test cases to make sure that the system functions correctly. In Test Manager, you can run:

• Automated test scripts • Manual test scripts • Test cases • Suites

Running Automated Test Scripts

1. Click File > Run Test Script, and select the test script type. 2. Select the test script to run.

The progress bar and default views

When you run a suite, TestManager displays the monitoring information in a Progress bar and in views. The Progress bar gives you a quick summary of the state of the run and cannot be changed. You can change the views, however, to provide summary information or detailed information about each virtual tester. You can change the default view of the script run, by choosing one of the different options on the menu bar.

Evaluating Tests

The Test Log

After you run a suite, test case, or test script, TestManager writes the results to a test log. You use the Test Log window of Rational TestManager to view the test logs created after you run a suite, test case, or test script. Reviewing the results of tests in the Test Log window reveals whether each test passed or failed.

The test log viewer, by default opens after a script or suite is run. But to open the test-log viewer manually go to File > Open Test log. In the Results tab of the Test Asset Workspace, expand the Build tree and select a log.


Double-click on the log you want to open.

The Test Log Main Window

The Test Log window of Test Manager contains the Test Log Summary area, the Test Case Results tab, and the Details tab.

The Test Log Viewer window looks like this:

Expand to verification points, by right-clicking on the test script and choosing ‘Expand all’.

As we can see from the above test-run, I’ve got a region image verification point that has failed. We can view the failure by double-clicking on the red-flag. We will receive a window like this:


The areas highlighted in red tell us the differences between the baseline and actual images. We'll talk more about verification points in the Robot seminar. However, you can look at the details of the verification point by highlighting the VP, right-clicking and choosing Properties. You will be faced with a dialog like the one below.

You can also view the test script, that actually holds the verification point, by right-clicking on the verification point flag.

Submitting Defects

You can submit automated test defects by right-clicking on the failed verification point and choosing ‘Submit Defect.’

You will be presented with the below window


TestManager opens the TestStudio defect form and fills in several fields for you with information from the log. (When you enter defects this way, TestManager does not actually start ClearQuest; it opens the defect form, which is part of ClearQuest.) You can also enter defects manually using ClearQuest, but none of the fields will be automatically filled in for you. Once you have entered defects, you can use ClearQuest to review the data and decide upon further action. (We will talk more about ClearQuest in a separate seminar.)

Reporting Results

TestManager provides you with a set of standard reports that you can use to analyze test case results. You can use queries to narrow down the data displayed in a report

Test Manager provides three types of reports to help you in your testing efforts, these being

1) Test case distribution, 2) Test case results distribution, and 3) Test case trend reports.

Test case distribution reports help you track the progress of your planning, implementation, and execution of test cases. You can run a report to find out who is testing a particular component or what percentage of test cases have been executed.

These reports have multiple display formats including pie, bar, line and tree charts.

Test Case Results Distribution reports provide information about the quality of a specific build and the progress of your ability to test that build. These reports tell you the number of test cases that have passed results, failed results, warnings, and informational results, and that were stopped or completed for a specific build. They can also tell you the number of test cases implemented, test cases executed, and the test case instances executed. (Instances of test cases can be added to performance testing suites.)

Test Case Trend reports provide information about the number of test inputs, and test cases that have been planned, developed, executed, or met the testing criteria over several builds, iterations, or dates.


Rational Administrator: Use to create and manage Rational repositories, which store your testing information.

Managing the Rational repositories with the Administrator

The Rational Administrator is the component that you use to create and manage Rational repositories. The Rational repository is the component for storing application testing information such as scripts, verification points, queries and defects. Each repository consists of a database and several directories of files. All Rational Test components on your computer update and retrieve data from the same active repository. Within the repository, information is categorized by projects. Projects help you organize your testing information and resources for easy tracking. Repositories and projects are created in the Rational Administrator, usually by someone with administrator privileges. Use the Administrator to create and delete a repository.

Rational Log View: Use to view and analyze test results.

Rational Site Check: Use to manage Internet and intranet web sites.


WINRUNNER

WinRunner is Mercury Interactive's enterprise functional testing tool for Microsoft Windows applications. Recent advances in client/server software tools enable developers to build applications quickly and with increased functionality. Quality Assurance departments must cope with software that has dramatically improved, but is increasingly complex to test. Each code change, enhancement, defect fix, or platform port necessitates retesting the entire application to ensure a quality release. Manual testing can no longer keep pace in this dynamic development environment.

WinRunner helps you automate the testing process, from test development to execution. You create adaptable and reusable test scripts that challenge the functionality of your application. Prior to a software release, you can run these tests in a single overnight run—enabling you to detect defects and ensure superior software quality.

WinRunner Testing Modes

WinRunner facilitates easy test creation by recording how you work on your application. As you point and click GUI (Graphical User Interface) objects in your application, WinRunner generates a test script in the C-like Test Script Language (TSL). You can further enhance your test scripts with manual programming. WinRunner includes the Function Generator, which helps you quickly and easily add functions to your recorded tests. WinRunner includes two modes for recording tests:

Context Sensitive

Context Sensitive mode records your actions on the application being tested in terms of the GUI objects you select (such as windows, lists, and buttons), while ignoring the physical location of the object on the screen. Every time you perform an operation on the application being tested, a TSL statement describing the object selected and the action performed is generated in the test script. As you record, WinRunner writes a unique description of each selected object to a GUI map. The GUI map consists of files maintained separately from your test scripts. If the user interface of your application changes, you have to update only the GUI map, instead of hundreds of tests. This allows you to easily reuse your Context Sensitive test scripts on future versions of your application. To run a test, you simply play back the test script. WinRunner emulates a user by moving the mouse pointer over your application, selecting objects, and entering keyboard input. WinRunner reads the object descriptions in the GUI map and then searches in the application being tested for objects matching these descriptions. It can locate objects in a window even if their placement has changed.

Analog

Analog mode records mouse clicks, keyboard input, and the exact x- and y-coordinates traveled by the mouse. When the test is run, WinRunner retraces the mouse tracks. Use Analog mode when exact mouse coordinates are important to your test, such as when testing a drawing application.


The WinRunner Testing Process

Testing with WinRunner involves six main stages:

Create GUI Map

Create Tests

Debug Tests

Run Tests

View Results

Report Defects

Create the GUI Map

The first stage is to create the GUI map so WinRunner can recognize the GUI objects in the application being tested. Use the RapidTest Script wizard to review the user interface of your application and systematically add descriptions of every GUI object to the GUI map. Alternatively, you can add descriptions of individual objects to the GUI map by clicking objects while recording a test. Note that when you work in GUI Map per Test mode, you can skip this step. For additional information,

Create Tests

Next, you create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested. You can insert checkpoints that check GUI objects, bitmaps, and databases. During this process, WinRunner captures data and saves it as expected results—the expected response of the application being tested. Note: If you are working with WinRunner Runtime, you cannot create a test or modify a test script.

Debug Tests

You run tests in Debug mode to make sure they run smoothly. You can set breakpoints, monitor variables, and control how tests are run to identify and isolate defects. Test results are saved in the debug folder, which you can discard once you’ve finished debugging the test.

Run Tests

You run tests in Verify mode to test your application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.

View Results

You determine the success or failure of the tests. Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. In cases of bitmap mismatches, you can also view a bitmap that displays only the difference between the expected and actual results.


Report Defects

If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.

WinRunner Q & A

1) How have you used WinRunner in your project?
a. I have been using WinRunner for creating automated scripts for GUI, functional and regression testing of the AUT.

2) Explain the WinRunner testing process.
a. The WinRunner testing process involves six main stages:
   i. Create the GUI Map file so that WinRunner can recognize the GUI objects in the application being tested.
   ii. Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
   iii. Debug Tests: run tests in Debug mode to make sure they run smoothly.
   iv. Run Tests: run tests in Verify mode to test your application.
   v. View Results: determine the success or failure of the tests.
   vi. Report Defects: if a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.

3) What is contained in the GUI map?
a. WinRunner stores information it learns about a window or object in a GUI map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested. Each object in the GUI map file has a logical name and a physical description.
b. There are two types of GUI map files:
   i. Global GUI Map file: a single GUI map file for the entire application.
   ii. GUI Map File per Test: WinRunner automatically creates a GUI map file for each test created.

4) How does WinRunner recognize objects on the application?

a. WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested.

5) Have you created test scripts and what is contained in the test scripts?

a. Yes, I have created test scripts. They contain statements in Mercury Interactive's Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner's visual programming tool, the Function Generator.


6) How does WinRunner evaluate test results?
a. Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.

7) Have you performed debugging of the scripts?

a. Yes, I have performed debugging of scripts. We can debug the script by executing the script in the debug mode. We can also debug script using the Step, Step Into, Step out functionalities provided by the WinRunner.

8) How do you run your test scripts?

a. We run tests in Verify mode to test the application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.

9) How do you analyze results and report the defects? a. Following each test run, WinRunner displays the results in a report. The report details all the

major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.

10) What is the use of Test Director software?

a. TestDirector is Mercury Interactive’s software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.

11) How do you integrate your automated scripts with TestDirector?
a. When you work with WinRunner, you can choose to save your tests directly to your TestDirector database. Alternatively, while creating a test case in TestDirector we can specify whether the script is automated or manual; if it is an automated script, TestDirector will build a skeleton for the script that can later be modified into one that can be used to test the AUT.

12) What are the different modes of recording?

a. There are two types of recording in WinRunner:
   i. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.
   ii. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.


13) What is the purpose of loading WinRunner Add-Ins?

a. Add-Ins are used in WinRunner to load the functions specific to a particular add-in into memory. While creating a script, only the functions in the selected add-ins are listed in the Function Generator, and while executing a script only the functions in the loaded add-ins can be executed; otherwise WinRunner gives an error message saying it does not recognize the function.

14) What are the reasons that WinRunner fails to identify an object on the GUI?

a. WinRunner may fail to identify an object in a GUI for various reasons:

i. The object is not a standard Windows object.

ii. If the browser used is not compatible with the WinRunner version, the GUI Map Editor will not be able to learn any of the objects displayed in the browser window.

15) What do you mean by the logical name of an object?

a. An object’s logical name is determined by its class. In most cases, the logical name is the label that appears on an object.

16) If the object does not have a name then what will be the logical name?

a. If the object does not have a name then the logical name could be the attached text.

17) What is the difference between the GUI map and GUI map files?

a. The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files:

i. Global GUI Map file: a single GUI Map file for the entire application.

ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.

b. A GUI Map file is a file that contains the windows and objects learned by WinRunner, with their logical names and physical descriptions.

18) How do you view the contents of the GUI map?

a. The GUI Map Editor displays the contents of a GUI Map. We can invoke the GUI Map Editor from the Tools menu in WinRunner. The GUI Map Editor displays the various GUI Map files created and the windows and objects learned into them, with their logical names and physical descriptions.

19) When you create a GUI map, do you record all the objects or only specific objects?

a. If we are learning a window, then WinRunner automatically learns all the objects in the window; otherwise we identify only those objects in a window that need to be learned, since we will be working with only those objects while creating scripts.

20) What is the purpose of set_window command?

a. Set_Window command sets the focus to the specified window. We use this command to set the focus to the required window before executing tests on a particular window.

Syntax: set_window(<logical name>, time); The logical name is the logical name of the window, and time is the number of seconds the execution waits for the given window to come into focus.
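For example (a sketch; “Flight Reservation” is the logical name of the sample application’s window):

set_window("Flight Reservation", 10);   # wait up to 10 seconds for the window to be in focus
button_press("Insert Order");           # subsequent statements act on this window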

21) How do you load GUI map?

a. We can load a GUI Map file by using the GUI_load command. Syntax: GUI_load(<file_name>);


22) What is the disadvantage of loading the GUI maps through start up scripts?

a. If we are using a single GUI Map file for the entire AUT, the memory used by the GUI Map may be quite high.

b. If there is any change in an object already learned, WinRunner will not be able to recognize the object, as the new description is not in the GUI Map file loaded in memory. So we have to learn the object again, update the GUI Map file, and reload it.

23) How do you unload the GUI map?

a. We can use GUI_close to unload a specific GUI Map file, or we can use the GUI_close_all command to unload all the GUI Map files loaded in memory.

Syntax: GUI_close(<file_name>); or GUI_close_all;
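A short illustration, assuming a hypothetical map file path:

GUI_load("C:\\qa\\gui_maps\\flights.gui");    # load the map so objects can be recognized
# ... test statements that act on the learned windows and objects ...
GUI_close("C:\\qa\\gui_maps\\flights.gui");   # unload this map file
GUI_close_all;                                # or unload every loaded map file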

24) What actually happens when you load GUI map?

a. When we load a GUI Map file, the information about the windows and objects, with their logical names and physical descriptions, is loaded into memory. So when WinRunner executes a script on a particular window, it can identify the objects using this information loaded in memory.

25) What is the purpose of the temp GUI map file?

a. While recording a script, WinRunner learns objects and windows by itself. These are actually stored in the temporary GUI Map file. We can specify in the General Options whether this temporary GUI Map file should be loaded each time.

26) What is the extension of gui map file?

a. The extension for a GUI Map file is “.gui”.

27) How do you find an object in a GUI map?

a. The GUI Map Editor provides Find and Show buttons.

i. To find a particular object from the GUI Map file in the application, select the object and click the Show button. The selected object blinks in the application.

ii. To find a particular object in a GUI Map file, click the Find button, which gives you the option of selecting the object in the application. When the object is selected, if the object has been learned into the GUI Map file, it is highlighted in the GUI Map file.

28) What different actions are performed by find and show button?

a. To find a particular object from the GUI Map file in the application, select the object and click the Show button. The selected object blinks in the application.

b. To find a particular object in a GUI Map file, click the Find button, which gives you the option of selecting the object in the application. When the object is selected, if the object has been learned into the GUI Map file, it is highlighted in the GUI Map file.

29) How do you identify which files are loaded in the GUI map?

a. The GUI Map Editor has a drop down “GUI File” displaying all the GUI Map files loaded into the memory.

30) How do you modify the logical name or the physical description of the objects in GUI map?

a. You can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor.


31) When do you feel you need to modify the logical name?

a. Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too long.

32) When is it appropriate to change the physical description?

a. Changing the physical description is necessary when the property value of an object changes.

33) How does WinRunner handle varying window labels?

a. We can handle varying window labels using regular expressions. WinRunner uses two “hidden” properties in order to use regular expressions in an object’s physical description. These properties are regexp_label and regexp_MSW_class.

i. The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into a window’s label description.

ii. The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.

34) What is the purpose of regexp_label property and regexp_MSW_class property?

a. The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into a window’s label description.

b. The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.

35) How do you suppress a regular expression?

a. We can suppress the regular expression of a window by replacing the regexp_label property with label property.

36) How do you copy and move objects between different GUI map files?

a. We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps to be followed are:

i. Choose Tools > GUI Map Editor to open the GUI Map Editor.

ii. Choose View > GUI Files.

iii. Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.

iv. View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.

v. In one file, select the objects you want to copy or move. Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.

vi. Click Copy or Move.

vii. To restore the GUI Map Editor to its original size, click Collapse.

37) How do you select multiple objects during merging the files?

a. Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.

38) How do you clear a GUI map file?

a. We can clear a GUI Map file using the “Clear All” option in the GUI Map Editor.


39) How do you filter the objects in the GUI map?

a. The GUI Map Editor has a Filter option. This provides for filtering with three different types of options:

i. Logical name displays only objects with the specified logical name.

ii. Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description.

iii. Class displays only objects of the specified class, such as all the push buttons.

40) How do you configure GUI map?

a. When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties to provide a unique identification of the object.

b. Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard classes used by WinRunner. These objects are therefore assigned to the generic “object” class. When WinRunner records an operation on a custom object, it generates obj_mouse_ statements in the test script.

c. If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing. The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.

41) What is the purpose of GUI map configuration?

a. GUI Map configuration is used to map a custom object to a standard object.

42) How do you make the configuration and mappings permanent?

a. The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.
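As a hedged sketch of what such a startup-script statement can look like (set_class_map is the TSL class-mapping function; the custom class name below is hypothetical, and the exact call should be confirmed against the TSL reference):

# map the custom "AcmeButton" class to the standard push_button class
set_class_map("AcmeButton", "push_button");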

43) What is the purpose of GUI spy?

a. Using the GUI Spy, you can view the properties of any GUI object on your desktop. You use the Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the GUI Spy dialog box. You can choose to view all the properties of an object, or only the selected set of properties that WinRunner learns.

44) What is the purpose of obligatory and optional properties of the objects?

a. For each class, WinRunner learns a set of default properties. Each default property is classified “obligatory” or “optional”.

i. An obligatory property is always learned (if it exists).

ii. An optional property is used only if the obligatory properties do not provide unique identification of an object. These optional properties are stored in a list. WinRunner selects the minimum number of properties from this list that are necessary to identify the object. It begins with the first property in the list, and continues, if necessary, to add properties to the description until it obtains unique identification for the object.

45) When the optional properties are learned?

a. An optional property is used only if the obligatory properties do not provide unique identification of an object.


46) What is the purpose of the location indicator and index indicator in GUI map configuration?

a. In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:

i. A location selector uses the spatial position of objects.

1. The location selector uses the spatial order of objects within the window, from the top left to the bottom right corners, to differentiate among objects with the same description.

ii. An index selector uses a unique number to identify the object in a window.

1. The index selector uses numbers assigned at the time of creation of objects to identify the object in a window. Use this selector if the location of objects with the same description may change within a window.

47) How do you handle custom objects?

a. A custom object is any GUI object not belonging to one of the standard classes used by WinRunner. WinRunner learns such objects under the generic “object” class. WinRunner records operations on custom objects using obj_mouse_ statements.

b. If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing.

48) What is the name of the custom class in WinRunner, and what methods does it apply to custom objects?

a. WinRunner learns custom class objects under the generic “object” class. WinRunner records operations on custom objects using obj_ statements.

49) In a situation where both the obligatory and optional properties cannot uniquely identify an object, what method does WinRunner apply?

a. In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:

i. A location selector uses the spatial position of objects. ii. An index selector uses a unique number to identify the object in a window.

50) What is the purpose of different record methods 1) Record 2) Pass up 3) As Object 4) Ignore.

a. Record instructs WinRunner to record all operations performed on a GUI object. This is the default record method for all classes. (The only exception is the static class (static text), for which the default is Pass Up.)

b. Pass Up instructs WinRunner to record an operation performed on this class as an operation performed on the element containing the object. Usually this element is a window, and the operation is recorded as win_mouse_click.

c. As Object instructs WinRunner to record all operations performed on a GUI object as though its class were “object” class.

d. Ignore instructs WinRunner to disregard all operations performed on the class.

51) How do you find out which is the start up file in WinRunner?

a. The test script named in the Startup Test box on the Environment tab of the General Options dialog box is the startup file in WinRunner.


52) What are virtual objects and how do you learn them?

a. Applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these bitmaps using win_mouse_click statements. By defining a bitmap as a virtual object, you can instruct WinRunner to treat it like a GUI object, such as a push button, when you record and run tests.

b. Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name.

To define a virtual object using the Virtual Object wizard: i. Choose Tools > Virtual Object Wizard. The Virtual Object wizard opens. Click Next.

ii. In the Class list, select a class for the new virtual object. For a list class, select the number of visible rows that are displayed in the window; for a table class, select the number of visible rows and columns. Click Next.

iii. Click Mark Object. Use the crosshairs pointer to select the area of the virtual object. You can use the arrow keys to make precise adjustments to the area you define with the crosshairs. Press Enter or click the right mouse button to display the virtual object’s coordinates in the wizard. If the object marked is visible on the screen, you can click the Highlight button to view it. Click Next.

iv. Assign a logical name to the virtual object. This is the name that appears in the test script when you record on the virtual object. If the object contains text that WinRunner can read, the wizard suggests using this text for the logical name. Otherwise, WinRunner suggests virtual_object, virtual_push_button, virtual_list, etc.

v. You can accept the wizard’s suggestion or type in a different name. WinRunner checks that there are no other objects in the GUI map with the same name before confirming your choice. Click Next.

53) How did you create your test scripts: 1) by recording or 2) by programming?

a. Programming. I have done complete programming only, absolutely no recording.

54) What are the two modes of recording? a. There are 2 modes of recording in WinRunner

i. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.

ii. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

55) What is a checkpoint and what are different types of checkpoints?

a. Checkpoints allow you to compare the current behavior of the application being tested to its behavior in an earlier version.

You can add four types of checkpoints to your test scripts:

i. GUI checkpoints verify information about GUI objects. For example, you can check that a button is enabled or see which item is selected in a list.

ii. Bitmap checkpoints take a “snapshot” of a window or area of your application and compare this to an image captured in an earlier version.

iii. Text checkpoints read text in GUI objects and in bitmaps and enable you to verify their contents.

iv. Database checkpoints check the contents and the number of rows and columns of a result set, which is based on a query you create on your database.


56) What are data driven tests?

a. When you test your application, you may want to check how it performs the same operations with multiple sets of data. For example, you can create a data-driven test with a loop that runs ten times: each time the loop runs, it is driven by a different set of data. In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table. You can perform these operations manually, or you can use the DataDriver Wizard to parameterize your test and store the data in a data table.

57) What are the synchronization points?

a. Synchronization points enable you to solve anticipated timing problems between the test and your application. For example, if you create a test that opens a database application, you can add a synchronization point that causes the test to wait until the database records are loaded on the screen.

b. For Analog testing, you can also use a synchronization point to ensure that WinRunner repositions a window at a specific location. When you run a test, the mouse cursor travels along exact coordinates. Repositioning the window enables the mouse pointer to make contact with the correct elements in the window.

58) What is parameterizing?

a. In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table.

59) How do you maintain the document information of the test scripts?

a. Before creating a test, you can document information about the test in the General and Description tabs of the Test Properties dialog box. You can enter the name of the test author, the type of functionality tested, a detailed description of the test, and a reference to the relevant functional specifications document.

60) What do you verify with the GUI checkpoint for a single property, what command does it generate, and what is its syntax?

a. You can check a single property of a GUI object. For example, you can check whether a button is enabled or disabled or whether an item in a list is selected. To create a GUI checkpoint for a property value, use the Check Property dialog box to add one of the following functions to the test script:

i. button_check_info
ii. scroll_check_info
iii. edit_check_info
iv. static_check_info
v. list_check_info
vi. win_check_info
vii. obj_check_info

Syntax: button_check_info ( button, property, property_value ); edit_check_info ( edit, property, property_value );
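For example, a sketch that checks that a button with the hypothetical logical name “Insert Order” is enabled:

set_window("Flight Reservation", 10);
button_check_info("Insert Order", "enabled", 1);   # passes if the enabled property equals 1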


61) What do you verify with the GUI checkpoint for object/window and what command it generates, explain syntax?

a. You can create a GUI checkpoint to check a single object in the application being tested. You can either check the object with its default properties or you can specify which properties to check.

b. Creating a GUI Checkpoint using the Default Checks

i. You can create a GUI checkpoint that performs a default check on the property recommended by WinRunner. For example, if you create a GUI checkpoint that checks a push button, the default check verifies that the push button is enabled.

ii. To create a GUI checkpoint using default checks:

1. Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.

2. Click an object.

3. WinRunner captures the current value of the property of the GUI object being checked and stores it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui statement.

Syntax: obj_check_gui ( object, checklist, expected_results_file, time ); win_check_gui ( window, checklist, expected_results_file, time );

c. Creating a GUI Checkpoint by Specifying which Properties to Check

d. You can specify which properties to check for an object. For example, if you create a checkpoint that checks a push button, you can choose to verify that it is in focus, instead of enabled.

e. To create a GUI checkpoint by specifying which properties to check:

i. Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.

ii. Double-click the object or window. The Check GUI dialog box opens.

iii. Click an object name in the Objects pane. The Properties pane lists all the properties for the selected object.

iv. Select the properties you want to check.

1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.

2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis (three dots) appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.

3. To change the viewing options for the properties of an object, use the Show Properties buttons.


4. Click OK to close the Check GUI dialog box. WinRunner captures the GUI information and stores it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or a win_check_gui statement.

Syntax: win_check_gui ( window, checklist, expected_results_file, time ); obj_check_gui ( object, checklist, expected_results_file, time );

62) What do you verify with the GUI checkpoint for multiple objects and what command it generates, explain syntax?

a. To create a GUI checkpoint for two or more objects:

i. Choose Create > GUI Checkpoint > For Multiple Objects, or click the GUI Checkpoint for Multiple Objects button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR MULTIPLE OBJECTS softkey in order to avoid extraneous mouse movements. The Create GUI Checkpoint dialog box opens.

ii. Click the Add button. The mouse pointer becomes a pointing hand and a help window opens.

iii. To add an object, click it once. If you click a window title bar or menu bar, a help window prompts you to check all the objects in the window.

iv. The pointing hand remains active. You can continue to choose objects by repeating step 3 above for each object you want to check.

v. Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The Create GUI Checkpoint dialog box reopens.

vi. The Objects pane contains the name of the window and objects included in the GUI checkpoint. To specify which objects to check, click an object name in the Objects pane. The Properties pane lists all the properties of the object. The default properties are selected.

1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.

2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.

3. To change the viewing options for the properties of an object, use the Show Properties buttons.

vii. To save the checklist and close the Create GUI Checkpoint dialog box, click OK. WinRunner captures the current property values of the selected GUI objects and stores it in the expected results folder. A win_check_gui statement is inserted in the test script.

Syntax: win_check_gui ( window, checklist, expected_results_file, time ); obj_check_gui ( object, checklist, expected_results_file, time );

63) What information is contained in the checklist file, and in which file are the expected results stored?

a. The checklist file contains information about the objects and the properties of the objects we are verifying.

b. The gui*.chk file contains the expected results and is stored in the exp folder.


64) What do you verify with the bitmap check point for object/window and what command it generates, explain syntax?

a. You can check an object, a window, or an area of a screen in your application as a bitmap. While creating a test, you indicate what you want to check. WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of the test, and inserts a checkpoint in the test script. When you run the test, WinRunner compares the bitmap currently displayed in the application being tested with the expected bitmap stored earlier. In the event of a mismatch, WinRunner captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual, and difference), you can identify the nature of the discrepancy.

b. When working in Context Sensitive mode, you can capture a bitmap of a window, object, or of a specified area of a screen. WinRunner inserts a checkpoint in the test script in the form of either a win_check_bitmap or obj_check_bitmap statement.

c. Note that when you record a test in Analog mode, you should press the CHECK BITMAP OF WINDOW softkey or the CHECK BITMAP OF SCREEN AREA softkey to create a bitmap checkpoint. This prevents WinRunner from recording extraneous mouse movements. If you are programming a test, you can also use the Analog function check_window to check a bitmap.

d. To capture a window or object as a bitmap:

i. Choose Create > Bitmap Checkpoint > For Object/Window or click the Bitmap Checkpoint for Object/Window button on the User toolbar. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF OBJECT/WINDOW softkey. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens.

ii. Point to the object or window and click it. WinRunner captures the bitmap and generates a win_check_bitmap or obj_check_bitmap statement in the script. The TSL statement generated for a window bitmap has the following syntax: win_check_bitmap ( window, bitmap, time );

iii. For an object bitmap, the syntax is: obj_check_bitmap ( object, bitmap, time );

iv. For example, when you click the title bar of the main window of the Flight Reservation application, the resulting statement might be: win_check_bitmap ("Flight Reservation", "Img2", 1);

v. However, if you click the Date of Flight box in the same window, the statement might be: obj_check_bitmap ("Date of Flight:", "Img1", 1);

Syntax: obj_check_bitmap ( object, bitmap, time [, x, y, width, height] );

65) What do you verify with the bitmap checkpoint for screen area and what command it generates, explain syntax?

a. You can define any rectangular area of the screen and capture it as a bitmap for comparison. The area can be any size: it can be part of a single window, or it can intersect several windows. The rectangle is identified by the coordinates of its upper left and lower right corners, relative to the upper left corner of the window in which the area is located. If the area intersects several windows or is part of a window with no title (for example, a popup window), its coordinates are relative to the entire screen (the root window).

b. To capture an area of the screen as a bitmap:

i. Choose Create > Bitmap Checkpoint > For Screen Area or click the Bitmap Checkpoint for Screen Area button. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF SCREEN AREA softkey. The WinRunner window is minimized, the mouse pointer becomes a crosshairs pointer, and a help window opens.


ii. Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the area; then release the mouse button.

iii. Press the right mouse button to complete the operation. WinRunner captures the area and generates a win_check_bitmap statement in your script.

iv. The win_check_bitmap statement for an area of the screen has the following syntax: win_check_bitmap ( window, bitmap, time, x, y, width, height );

66) What do you verify with the database checkpoint default and what command it generates, explain syntax?

a. By adding runtime database record checkpoints you can compare the information in your application during a test run with the corresponding record in your database. By adding standard database checkpoints to your test scripts, you can check the contents of databases in different versions of your application.

b. When you create database checkpoints, you define a query on your database, and your database checkpoint checks the values contained in the result set. The result set is set of values retrieved from the results of the query.

c. You can create runtime database record checkpoints in order to compare the values displayed in your application during the test run with the corresponding values in the database. If the comparison does not meet the success criteria you specify for the checkpoint, the checkpoint fails. You can define a successful runtime database record checkpoint as one where one or more matching records were found, exactly one matching record was found, or no matching records were found.

d. You can create standard database checkpoints to compare the current values of the properties of the result set during the test run to the expected values captured during recording or otherwise set before the test run. If the expected results and the current results do not match, the database checkpoint fails. Standard database checkpoints are useful when the expected results can be established before the test run. Syntax: db_check(<checklist_file>, <expected_result>);

e. You can add a runtime database record checkpoint to your test in order to compare information that appears in your application during a test run with the current value(s) in the corresponding record(s) in your database. You add runtime database record checkpoints by running the Runtime Record Checkpoint wizard. When you are finished, the wizard inserts the appropriate db_record_check statement into your script.

Syntax: db_record_check ( ChecklistFileName, SuccessConditions, RecordNumber );

ChecklistFileName A file created by WinRunner and saved in the test's checklist folder. The file contains information about the data to be captured during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Verification wizard.

SuccessConditions Contains one of the following values:

1. DVR_ONE_OR_MORE_MATCH - The checkpoint passes if one or more matching database records are found.

2. DVR_ONE_MATCH - The checkpoint passes if exactly one matching database record is found.

3. DVR_NO_MATCH - The checkpoint passes if no matching database records are found.

RecordNumber An out parameter returning the number of records in the database.


67) How do you handle a dynamically changing area of the window in bitmap checkpoints?

a. The “difference between bitmaps” option on the Run tab of the General Options dialog box defines the minimum number of pixels that constitute a bitmap mismatch.

68) What do you verify with the database check point custom and what command it generates, explain syntax?

a. When you create a custom check on a database, you create a standard database checkpoint in which you can specify which properties to check on a result set.

b. You can create a custom check on a database in order to:

i. check the contents of part of or the entire result set

ii. edit the expected results of the contents of the result set

iii. count the rows in the result set

iv. count the columns in the result set

c. You can create a custom check on a database using ODBC, Microsoft Query or Data Junction.

69) What do you verify with the sync point for object/window property and what command it generates, explain syntax?

a. Synchronization compensates for inconsistencies in the performance of your application during a test run. By inserting a synchronization point in your test script, you can instruct WinRunner to suspend the test run and wait for a cue before continuing the test.

b. You can create a synchronization point that instructs WinRunner to wait for a specified object or window to appear. For example, you can tell WinRunner to wait for a window to open before performing an operation within that window, or you may want WinRunner to wait for an object to appear in order to perform an operation on that object.

c. You use the obj_exists function to create an object synchronization point, and you use the win_exists function to create a window synchronization point. These functions have the following syntax:

Syntax: obj_exists ( object [, time ] ); win_exists ( window [, time ] );
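A small sketch of a window synchronization point (the window and button names are hypothetical):

# wait up to 30 seconds for the results window before continuing
if (win_exists("Search Results", 30) == E_OK)
{
    set_window("Search Results", 5);
    button_press("OK");
}
else
    tl_step("sync", 1, "Search Results window did not appear.");   # 1 = fail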

70) What do you verify with the sync point for object/window bitmap and what command it generates, explain syntax?

a. You can create a bitmap synchronization point that waits for the bitmap of an object or a window to appear in the application being tested.

b. During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured earlier. If the bitmaps match, then WinRunner continues the test.

Syntax: obj_wait_bitmap ( object, image, time ); win_wait_bitmap ( window, image, time );
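For example (a sketch; the window name and captured image name “Img3” are assumptions):

# suspend the run for up to 15 seconds until the previously captured bitmap appears
win_wait_bitmap("Flight Reservation", "Img3", 15);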

71) What do you verify with the sync point for screen area and what command it generates, explain syntax?

a. For screen area verification we actually capture the screen area into a bitmap and verify the application screen area with the bitmap file during execution

Syntax: obj_wait_bitmap(object, image, time, x, y, width, height);


72) How do you edit a checklist file, and when do you need to edit it?

a. WinRunner has an edit checklist file option under the Create menu. Select “Edit GUI Checklist” to modify a GUI checklist file and “Edit Database Checklist” to edit a database checklist file. This brings up a dialog box that gives you the option of selecting the checklist file to modify. There is also an option to select the scope of the checklist file, whether it is test-specific or shared. Select the checklist file and click OK, which opens the window to edit the properties of the objects.

73) How do you edit the expected value of an object?

a. We can modify the expected value of the object by executing the script in Update mode. We can also manually edit the gui*.chk file under the exp folder, which contains the expected values.

74) How do you modify the expected results of a GUI checkpoint?

a. We can modify the expected results of a GUI checkpoint by running the script containing the checkpoint in Update mode.

75) How do you handle ActiveX and Visual basic objects?

a. WinRunner provides add-ins for ActiveX and Visual Basic objects. When loading WinRunner, select those add-ins; they provide a set of functions to work on ActiveX and VB objects.

76) How do you create ODBC query?

a. We can create an ODBC query using the database checkpoint wizard. It provides an option to create an SQL file that uses an ODBC DSN to connect to the database. The SQL file contains the connection string and the SQL statement.

77) How do you record a data driven test?

a. We can create a data-driven test using data from a flat file, a data table, or a database; a short sketch follows this list.

i. Flat file: we store the data to be used, in the required format, in the file. We access the file using the file manipulation commands, read data from the file, and assign the data to variables.

ii. Data table: this is an Excel file. We can store test data in these files and manipulate them. We use the ddt_* functions to manipulate data in the data table.

iii. Database: we store test data in a database and access the data using db_* functions.
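A minimal data-driven loop using the ddt_* functions (a sketch; the table name and the “Name” column are assumptions):

table = "default.xls";                        # data table stored in the test folder
rc = ddt_open(table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause("Cannot open the data table.");
ddt_get_row_count(table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row(table, i);                    # make row i the active row
    set_window("Flight Reservation", 10);
    edit_set("Name:", ddt_val(table, "Name")); # read the "Name" column for this row
    button_press("Insert Order");
}
ddt_close(table);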

78) How do you convert a database file to a text file?

a. You can use Data Junction to create a conversion file which converts a database to a target text file.

79) How do you parameterize database check points?

a. When you create a standard database checkpoint using ODBC (Microsoft Query), you can add parameters to an SQL statement to parameterize the checkpoint. This is useful if you want to create a database checkpoint with a query in which the SQL statement defining your query changes.


80) How do you create parameterized SQL commands?

a. A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized, i.e., the value of the field is specified by a question mark symbol ( ? ). For example, the following SQL statement is based on a query on the database in the sample Flight Reservation application:

i. SELECT Flights.Departure, Flights.Flight_Number, Flights.Day_Of_Week FROM Flights Flights WHERE (Flights.Departure=?) AND (Flights.Day_Of_Week=?)

SELECT defines the columns to include in the query. FROM specifies the path of the database. WHERE (optional) specifies the conditions, or filters, to use in the query. Departure is the parameter that represents the departure point of a flight. Day_Of_Week is the parameter that represents the day of the week of a flight.

b. When creating a database checkpoint, you insert a db_check statement into your test script. When you parameterize the SQL statement in your checkpoint, the db_check function has a fourth, optional, argument: the parameter_array argument. A statement similar to the following is inserted into your test script:

db_check("list1.cdl", "dbvf1", NO_LIMIT, dbvf1_params); The parameter_array argument will contain the values to substitute for the parameters in the parameterized checkpoint.

81) Explain the following commands: a. db_connect

i. to connect to a database db_connect(<session_name>, <connection_string>);

b. db_execute_query i. to execute a query

db_execute_query ( session_name, SQL, record_number ); record_number is the out value.

c. db_get_field_value i. returns the value of a single field in the specified row_index and column in the

session_name database session. db_get_field_value ( session_name, row_index, column );

d. db_get_headers i. returns the number of column headers in a query and the content of the column headers,

concatenated and delimited by tabs. db_get_headers ( session_name, header_count, header_content );

e. db_get_row i. returns the content of the row, concatenated and delimited by tabs.

db_get_row ( session_name, row_index, row_content ); f. db_write_records

i. writes the record set into a text file delimited by tabs. db_write_records ( session_name, output_file [ , headers [ , record_limit ] ] );

g. db_get_last_error i. returns the last error message of the last ODBC or Data Junction operation in the

session_name database session. db_get_last_error ( session_name, error );

h. db_disconnect i. disconnects from the database and ends the database session.

db_disconnect ( session_name ); i. db_dj_convert


i. runs the djs_file Data Junction export file. When you run this file, the Data Junction Engine converts data from one spoke (source) to another (target). The optional parameters enable you to override the settings in the Data Junction export file.

db_dj_convert ( djs_file [ , output_file [ , headers [ , record_limit ] ] ] );
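A short usage sketch tying these functions together (the DSN name, query, and table are hypothetical; check whether row indexing starts at 0 or 1 in your version):

db_connect("query1", "DSN=Flight32");                           # open an ODBC database session
db_execute_query("query1", "SELECT * FROM Orders", rec_count);  # rec_count is the out value
db_get_row("query1", 0, row_content);                           # fetch one row, tab-delimited
report_msg("First row: " & row_content);                        # write it to the test results
db_disconnect("query1");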

82) What checkpoints will you use to read and check text on the GUI, and what is the syntax?

a. You can use text checkpoints in your test scripts to read and check text in GUI objects and in areas of the screen. While creating a test you point to an object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. You may then add simple programming elements to your test scripts to verify the contents of the text.

b. You can use a text checkpoint to:

i. Read text from a GUI object or window in your application, using obj_get_text and win_get_text

ii. Search for text in an object or window, using win_find_text and obj_find_text

iii. Move the mouse pointer to text in an object or window, using obj_move_locator_text and win_move_locator_text

iv. Click on text in an object or window, using obj_click_on_text and win_click_on_text

83) Explain Get Text checkpoint from object/window with syntax? a. We use obj_get_text (<logical_name>, <out_text>) function to get the text from an object b. We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.

84) Explain Get Text checkpoint from screen area with syntax? a. We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.
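For example (a sketch; the window and object names are assumptions):

set_window("Flight Reservation", 10);
obj_get_text("Order No:", order_text);          # read text from a single object
win_get_text("Flight Reservation", all_text);   # read all text in the window
if (order_text == "")
    tl_step("order_no", 1, "Order number field is empty.");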

85) Explain Get Text checkpoint from selection (web only) with syntax? a. Returns a text string from an object.

web_obj_get_text (object, table_row, table_column, out_text [, text_before, text_after, index]); i. object The logical name of the object.

ii. table_row If the object is a table, it specifies the location of the row within a table. The string is preceded by the # character.

iii. table_column If the object is a table, it specifies the location of the column within a table. The string is preceded by the # character.

iv. out_text The output variable that stores the text string. v. text_before Defines the start of the search area for a particular text string.

vi. text_after Defines the end of the search area for a particular text string. vii. index The occurrence number to locate (the default is 1).

86) Explain Get Text checkpoint web text checkpoint with syntax?

a. We use the web_obj_text_exists function for web text checkpoints.

web_obj_text_exists ( object, table_row, table_column, text_to_find [, text_before, text_after] );

a. object The logical name of the object to search.
b. table_row If the object is a table, it specifies the location of the row within a table. The string is preceded by the character #.
c. table_column If the object is a table, it specifies the location of the column within a table. The string is preceded by the character #.
d. text_to_find The string that is searched for.
e. text_before Defines the start of the search area for a particular text string.
f. text_after Defines the end of the search area for a particular text string.


87) Which TSL functions will you use for:

a. Searching for text on the window

i. find_text ( string, out_coord_array, search_area [, string_def ] );

string The string that is searched for. The string must be complete, contain no spaces, and it must be preceded and followed by a space outside the quotation marks. To specify a literal, case-sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. In this case, the string variable can include a regular expression.

out_coord_array The name of the array that stores the screen coordinates of the text (see explanation below).

search_area The area to search, specified as coordinates x1,y1,x2,y2. These define any two diagonal corners of a rectangle. The interpreter searches for the text in the area defined by the rectangle.

string_def Defines the type of search to perform. If no value is specified (0 or FALSE, the default), the search is for a single complete word only. When 1, or TRUE, is specified, the search is not restricted to a single, complete word.

b. getting the location of the text string i. win_find_text ( window, string, result_array [, search_area [, string_def ] ] );

window The logical name of the window to search.

string The text to locate. To specify a literal, case sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression. The regular expression should not include an exclamation mark (!), however, which is treated as a literal character. For more information regarding Regular Expressions, refer to the "Using Regular Expressions" chapter in your User's Guide.

result_array The name of the output variable that stores the location of the string as a four-element array.

search_area The region of the object to search, relative to the window. This area is defined as a pair of coordinates, with x1,y1,x2,y2 specifying any two diagonally opposite corners of the rectangular search region. If this parameter is not defined, then the entire window is considered the search area.

string_def Defines how the text search is performed. If no string_def is specified (0 or FALSE, the default parameter), the interpreter searches for a complete word only. If 1, or TRUE, is specified, the search is not restricted to a single, complete word.

c. Moving the pointer to that text string i. win_move_locator_text (window, string [ ,search_area [ ,string_def ] ] );

window The logical name of the window.

string The text to locate. To specify a literal, case sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression (the regular expression need not begin with an exclamation mark).

search_area The region of the object to search, relative to the window. This area is defined as a pair of coordinates, with x1, y1, x2, y2 specifying any two diagonally opposite corners of the rectangular search region. If this parameter is not defined, then the entire window specified is considered the search area.

string_def Defines how the text search is performed. If no string_def is specified (0 or FALSE, the default parameter), the interpreter searches for a complete word only. If 1, or TRUE, is specified, the search is not restricted to a single, complete word.

d. Comparing the text i. compare_text (str1, str2 [, chars1, chars2]);


str1, str2 The two strings to be compared.

chars1 One or more characters in the first string.

chars2 One or more characters in the second string. These characters are substituted for those in chars1.
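A combined sketch of these functions (the window name and search string are assumptions):

# locate the word "Total" inside the report window and move the pointer to it
rc = win_find_text("Report", "Total", coord_array);
if (rc == E_OK)
    win_move_locator_text("Report", "Total");
# compare two strings, treating "T" and "t" as equivalent
if (compare_text("Total", "total", "T", "t"))
    report_msg("Strings match after substitution.");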

88) What are the steps of creating a data driven test? a. The steps involved in data driven testing are:

i. Creating a test

ii. Converting to a data-driven test and preparing a database

iii. Running the test

iv. Analyzing the test results

89) Record a data driven test script using data driver wizard?

a. You can use the DataDriver Wizard to convert your entire script or a part of your script into a data-driven test. For example, your test script may include recorded operations, checkpoints, and other statements that do not need to be repeated for multiple sets of data. You need to parameterize only the portion of your test script that you want to run in a loop with multiple sets of data.

To create a data-driven test:

i. If you want to turn only part of your test script into a data-driven test, first select those lines in the test script.

ii. Choose Tools > DataDriver Wizard.

iii. If you want to turn only part of the test into a data-driven test, click Cancel. Select those lines in the test script and reopen the DataDriver Wizard. If you want to turn the entire test into a data-driven test, click Next.

iv. The Use a new or existing Excel table box displays the name of the Excel file that WinRunner creates, which stores the data for the data-driven test.

v. Accept the default data table for this test, enter a different name for the data table, or use the browse button to locate the path of an existing data table. By default, the data table is stored in the test folder.

vi. In the Assign a name to the variable box, enter a variable name with which to refer to the data table, or accept the default name, “table.”

vii. At the beginning of a data-driven test, the Excel data table you selected is assigned as the value of the table variable. Throughout the script, only the table variable name is used.

viii. This makes it easy for you to assign a different data table to the script at a later time without making changes throughout the script.

ix. Choose from among the following options:

1. Add statements to create a data-driven test: Automatically adds statements to run your test in a loop: sets a variable name by which to refer to the data table; adds braces ({ and }), a for statement, and a ddt_get_row_count statement to your test script selection to run it in a loop while it reads from the data table.

2. It also adds ddt_open and ddt_close statements to your test script to open and close the data table, which are necessary in order to iterate over rows in the table. Note that you can also add these statements to your test script manually.

3. If you do not choose this option, you will receive a warning that your data-driven test must contain a loop and statements to open and close your datatable.

4. Import data from a database: Imports data from a database. This option adds ddt_update_from_db, and ddt_save statements to your test script after the ddt_open statement.


5. Note that in order to import data from a database, either Microsoft Query or Data Junction must be installed on your machine. You can install Microsoft Query from the custom installation of Microsoft Office. Note that Data Junction is not automatically included in your WinRunner package. To purchase Data Junction, contact your Mercury Interactive representative. For detailed information on working with Data Junction, refer to the documentation in the Data Junction package.

6. Parameterize the test: Replaces fixed values in selected checkpoints and in recorded statements with parameters, using the ddt_val function, and in the data table, adds columns with variable values for the parameters. Line by line: Opens a wizard screen for each line of the selected test script, which enables you to decide whether to parameterize a particular line, and if so, whether to add a new column to the data table or use an existing column when parameterizing data.

7. Automatically: Replaces all data with ddt_val statements and adds new columns to the data table. The first argument of the function is the name of the column in the data table. The replaced data is inserted into the table.

x. The Test script line to parameterize box displays the line of the test script to parameterize. The highlighted value can be replaced by a parameter. The Argument to be replaced box displays the argument (value) that you can replace with a parameter. You can use the arrows to select a different argument to replace.

Choose whether and how to replace the selected data:

1. Do not replace this data: Does not parameterize this data.

2. An existing column: If parameters already exist in the data table for this test, select an existing parameter from the list.

3. A new column: Creates a new column for this parameter in the data table for this test. Adds the selected data to this column of the data table. The default name for the new parameter is the logical name of the object in the selected TSL statement above. Accept this name or assign a new name.

xi. The final screen of the wizard opens.

1. If you want the data table to open after you close the wizard, select Show data table now.

2. To perform the tasks specified in previous screens and close the wizard, click Finish.

3. To close the wizard without making any changes to the test script, click Cancel.

90) What are the three modes of running the scripts?

a. WinRunner provides three modes in which to run tests: Verify, Debug, and Update. You use each mode during a different phase of the testing process.

i. Verify: use this mode to check your application.

ii. Debug: use this mode to help you identify bugs in a test script.

iii. Update: use this mode to update the expected results of a test or to create a new expected results folder.


91) Explain the following TSL functions:

a. Ddt_open i. Creates or opens a datatable file so that WinRunner can access it.

Syntax: ddt_open ( data_table_name, mode ); data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0. mode The mode for opening the data table: DDT_MODE_READ (read-only) or DDT_MODE_READWRITE (read or write).

b. Ddt_save

i. Saves the information in the data table to the data table file. Syntax: ddt_save (data_table_name); data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table.

c. Ddt_close i. Closes a data table file

Syntax: ddt_close ( data_table_name ); data_table_name The name of the data table. The data table is a Microsoft Excel file or a tabbed text file. The first row in the file contains the names of the parameters.

d. Ddt_export i. Exports the information of one data table file into a different data table file.

Syntax: ddt_export (data_table_name1, data_table_name2); data_table_name1 The source data table filename. data_table_name2 The destination data table filename.

e. Ddt_show i. Shows or hides the table editor of a specified data table.

Syntax: ddt_show (data_table_name [, show_flag]); data_table_name The name of the data table. The name may be the table variable name, the Microsoft

Excel file or a tabbed text file name, or the full path and file name of the table.

show_flag The value indicating whether the editor should be shown (default=1) or hidden (0).

f. Ddt_get_row_count i. Retrieves the number of rows in a data table.

Syntax: ddt_get_row_count (data_table_name, out_rows_count); data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. out_rows_count The output variable that stores the total number of rows in the data table.

g. ddt_next_row i. Changes the active row in a data table to the next row.

Syntax: ddt_next_row (data_table_name); data_table_name The name of the data table. The name may be the table variable name, the Microsoft

Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file

contains the names of the parameters.


h. ddt_set_row i. Sets the active row in a data table.

Syntax: ddt_set_row (data_table_name, row); data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0. row The new active row in the data table.

i. ddt_set_val i. Sets a value in the current row of the data table

Syntax: ddt_set_val (data_table_name, parameter, value); data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0. parameter The name of the column into which the value will be inserted. value The value to be written into the table.

j. ddt_set_val_by_row i. Sets a value in a specified row of the data table.

Syntax: ddt_set_val_by_row (data_table_name, row, parameter, value); data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0. row The row number in the table. It can be any existing row or the current row number plus 1, which will add a new row to the data table. parameter The name of the column into which the value will be inserted. value The value to be written into the table.

k. ddt_get_current_row i. Retrieves the active row of a data table.

Syntax: ddt_get_current_row ( data_table_name, out_row ); data_table_name The name of the data table. The name may be the table variable name, the Microsoft

Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file

contains the names of the parameters. This row is labeled row 0.

out_row The output variable that stores the active row in the data table.
l. ddt_is_parameter
i. Returns whether a parameter in a data table is valid.
Syntax: ddt_is_parameter (data_table_name, parameter); data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.
parameter The parameter name to check in the data table.

m. ddt_get_parameters i. Returns a list of all parameters in a data table.

Syntax: ddt_get_parameters ( table, params_list, params_num ); table The pathname of the data table. params_list This out parameter returns the list of all parameters in the data table, separated by tabs. params_num This out parameter returns the number of parameters in params_list.


n. ddt_val i. Returns the value of a parameter in the active row in a data table.

Syntax: ddt_val (data_table_name, parameter); data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. parameter The name of the parameter in the data table.

o. ddt_val_by_row i. Returns the value of a parameter in the specified row in a data table.

Syntax: ddt_val_by_row ( data_table_name, row_number, parameter ); data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0. row_number The number of the row in the data table. parameter The name of the parameter in the data table.

p. ddt_report_row i. Reports the active row in a data table to the test results

Syntax: ddt_report_row (data_table_name); data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

q. ddt_update_from_db
i. Imports data from a database into a data table. It is inserted into your test script when you select the Import data from a database option in the DataDriver Wizard. When you run your test, this function updates the data table with data from the database.
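
Taken together, these functions are typically used in a short read loop. The following is a minimal sketch rather than a prescribed pattern: the file name default.xls, the column Order_Number, and the use made of each value are invented for illustration.

table = "default.xls";                        # hypothetical data table file
ddt_open(table, DDT_MODE_READ);               # open the table for reading
ddt_get_row_count(table, row_count);          # number of data rows in the table
for (i = 1; i <= row_count; i++)
{
    ddt_set_row(table, i);                    # make row i the active row
    order = ddt_val(table, "Order_Number");   # read the value in the Order_Number column
    report_msg("Processing order " & order);  # illustrative use of the value
}
ddt_close(table);                             # close the data table when finished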

92) How do you handle unexpected events and errors?

a. WinRunner uses exception handling to detect an unexpected event when it occurs and act to recover the test run.

WinRunner enables you to handle the following types of exceptions:
Pop-up exceptions: Instruct WinRunner to detect and handle the appearance of a specific window.
TSL exceptions: Instruct WinRunner to detect and handle TSL functions that return a specific error code.
Object exceptions: Instruct WinRunner to detect and handle a change in a property for a specific GUI object.
Web exceptions: When the WebTest add-in is loaded, you can instruct WinRunner to handle unexpected events and errors that occur in your Web site during a test run.

Defining exception handling involves three steps:
1. Define the exception.
2. Define the handler function.
3. Activate exception handling.


93) How do you handle pop-up exceptions?
a. A pop-up exception handler handles the pop-up messages that come up during the execution of the script in the AUT. To handle this type of exception, we make WinRunner learn the window and also specify a handler for the exception. It could be:

i. Default actions: WinRunner clicks the OK or Cancel button in the pop-up window, or presses Enter on the keyboard. To select a default handler, click the appropriate button in the dialog box.

ii. User-defined handler: If you prefer, specify the name of your own handler. Click User Defined Function Name and type in a name in the User Defined Function Name box.
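
As a rough sketch of the user-defined option, the handler is an ordinary TSL function that dismisses the pop-up so the run can continue. The window name "Server Busy" and the Retry button below are invented, and the exact argument WinRunner passes to the handler (normally the window that appeared) should be confirmed against your WinRunner documentation; the function must still be registered as the handler in the Exceptions dialog box.

public function close_server_busy(in win)    # win: the pop-up window detected by WinRunner
{
    set_window(win, 5);                      # make the pop-up the active window
    button_press("Retry");                   # dismiss it so the test run can continue
}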

94) How do you handle TSL exceptions?

a. A TSL exception enables you to detect and respond to a specific error code returned during test execution.

b. Suppose you are running a batch test on an unstable version of your application. If your application crashes, you want WinRunner to recover test execution. A TSL exception can instruct WinRunner to recover test execution by exiting the current test, restarting the application, and continuing with the next test in the batch.

c. The handler function is responsible for recovering test execution. When WinRunner detects a specific error code, it calls the handler function. You implement this function to respond to the unexpected error in the way that meets your specific testing needs.

d. Once you have defined the exception, WinRunner activates handling and adds the exception to the list of default TSL exceptions in the Exceptions dialog box. Default TSL exceptions are defined by the XR_EXCP_TSL configuration parameter in the wrun.ini configuration file.

95) How do you handle object exceptions?

a. During testing, unexpected changes can occur to GUI objects in the application you are testing. These changes are often subtle but they can disrupt the test run and distort results.

b. You can use exception handling to detect a change in a property of a GUI object during the test run and to recover test execution by calling a handler function, after which the test continues.

96) How do you comment your script?

a. We comment a script or line of script by inserting a ‘#’ at the beginning of the line.

97) What is a compiled module? a. A compiled module is a script containing a library of user-defined functions that you want to call

frequently from other tests. When you load a compiled module, its functions are automatically compiled and remain in memory. You can call them directly from within any test.

b. Compiled modules can improve the organization and performance of your tests. Since you debug compiled modules before using them, your tests will require less error-checking. In addition, calling a function that is already compiled is significantly faster than interpreting a function in a test script.

98) What is the difference between script and compile module?

a. A test script is the executable entity in WinRunner, while a compiled module is used to store reusable functions. Compiled modules are not executable on their own.

b. WinRunner performs a pre-compilation automatically when it saves a module assigned a property value of “Compiled Module”.

c. By default, modules containing TSL code have a property value of "main". Main modules are called for execution from within other modules. Main modules are dynamically compiled into


machine code only when WinRunner recognizes a "call" statement. Example of a call for the "app_init" script:
call cso_init();
call( "C:\\MyAppFolder\\" & "app_init" );

d. Compiled modules are loaded into memory to be referenced from TSL code in any module. Example of a load statement:
reload ("C:\\MyAppFolder\\" & "flt_lib");
or
load ("C:\\MyAppFolder\\" & "flt_lib");

99) Write and explain the various loop commands.
a. A for loop instructs WinRunner to execute one or more statements a specified number of times.
It has the following syntax:
for ( [ expression1 ]; [ expression2 ]; [ expression3 ] )
    statement
i. First, expression1 is executed. Next, expression2 is evaluated. If expression2 is true, statement is executed and then expression3 is executed. The cycle is repeated as long as expression2 remains true. If expression2 is false, the for statement terminates and execution passes to the first statement immediately following the loop.
ii. For example, the for loop below selects the file UI_TEST from the File Name list in the Open window. It selects this file five times and then stops.
set_window ("Open");
for (i=0; i<5; i++)
    list_select_item ("File Name:_1", "UI_TEST");  # Item Number 2
b. A while loop executes a block of statements for as long as a specified condition is true.

It has the following syntax:
while ( expression )
    statement;
i. While expression is true, the statement is executed. The loop ends when the expression is false. For example, the while statement below performs the same function as the for loop above.
set_window ("Open");
i = 0;
while (i < 5)
{
    i++;
    list_select_item ("File Name:_1", "UI_TEST");  # Item Number 2
}
c. A do/while loop executes a block of statements for as long as a specified condition is true. Unlike the for loop and the while loop, a do/while loop tests the condition at the end of the loop, not at the beginning. A do/while loop has the following syntax:
do
    statement
while (expression);
i. The statement is executed and then the expression is evaluated. If the expression is true, the cycle is repeated. If the expression is false, the cycle is not repeated.
ii. For example, the do/while statement below opens and closes the Order dialog box of Flight Reservation five times.
set_window ("Flight Reservation");


i = 0;
do
{
    menu_select_item ("File;Open Order...");
    set_window ("Open Order");
    button_press ("Cancel");
    i++;
} while (i < 5);

100) Write and explain the decision-making commands.
a. You can incorporate decision-making into your test scripts using if/else or switch statements.
i. An if/else statement executes a statement if a condition is true; otherwise, it executes another statement.
It has the following syntax:
if ( expression )
    statement1;
[ else
    statement2; ]
expression is evaluated. If expression is true, statement1 is executed. If expression is false, statement2 is executed.
b. A switch statement enables WinRunner to make a decision based on an expression that can have more than two values.
It has the following syntax:
switch ( expression )
{
    case case_1: statements
    case case_2: statements
    case case_n: statements
    default: statement(s)
}
The switch statement consecutively evaluates each case expression until one is found that equals the initial expression. If no case is equal to the expression, the default statements are executed. The default statements are optional.
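
A small, hypothetical illustration of the if/else form; the value and messages are invented and simply show which branch executes.

balance = 150;                                  # illustrative value only
if (balance < 0)
    report_msg("Account is overdrawn");
else
    report_msg("Account balance is " & balance);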

101) Write and explain the switch command.
a. A switch statement enables WinRunner to make a decision based on an expression that can have more than two values. It has the following syntax:
switch ( expression )
{
    case case_1: statements
    case case_2: statements
    case case_n: statements
    default: statement(s)
}
b. The switch statement consecutively evaluates each case expression until one is found that equals the initial expression. If no case is equal to the expression, the default statements are executed. The default statements are optional.


102) How do you write messages to the report? a. To write a message to the report we use the report_msg statement.

Syntax: report_msg (message);
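
For example (the value and wording are invented), a test might log an intermediate result:

order_no = 17;                                    # illustrative value
report_msg("Created order number " & order_no);  # the message appears in the test results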

103) What is the command to invoke an application? a. invoke_application is the function used to invoke an application.

Syntax: invoke_application(file, command_option, working_dir, SHOW);
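
A hedged example: the executable path is invented, and SW_SHOW is used here as a typical value of the show argument; confirm the constants your WinRunner version accepts (the Function Generator lists them).

# launch a hypothetical application with no command-line options
invoke_application("C:\\Program Files\\MyApp\\myapp.exe", "", "C:\\Program Files\\MyApp", SW_SHOW);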

104) What is the purpose of the tl_step command? a. It is used to determine whether sections of a test pass or fail.

Syntax: tl_step(step_name, status, description);
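
An invented illustration (by convention a status of 0 marks the step as passed and any other value marks it as failed; confirm against the TSL reference):

tl_step("verify_balance", 0, "Balance field shows the expected amount");       # reported as passed
tl_step("verify_history", 1, "Transaction history is missing the last row");   # reported as failed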

105) Which TSL function will you use to compare two files? a. We can compare two files in WinRunner using the file_compare function.

Syntax: file_compare (file1, file2 [, save file]);
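
For instance (the file paths are invented), the optional third argument saves the differences to a file for later review:

file_compare("C:\\expected\\invoice.txt", "C:\\actual\\invoice.txt", "C:\\results\\invoice_diff.txt");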

106) What is the use of function generator? a. The Function Generator provides a quick, error-free way to program scripts. You can:

i. Add Context Sensitive functions that perform operations on a GUI object or get information from the application being tested.

ii. Add Standard and Analog functions that perform non-Context Sensitive tasks such as synchronizing test execution or sending user-defined messages to a report.

iii. Add Customization functions that enable you to modify WinRunner to suit your testing environment.

107) What is the use of putting call and call_close statements in the test script? a. You can use two types of call statements to invoke one test from another:

i. A call statement invokes a test from within another test. ii. A call_close statement invokes a test from within a script and closes the test when the test

is completed. iii. The call statement has the following syntax:

1. call test_name ( [ parameter1, parameter2, ...parametern ] ); iv. The call_close statement has the following syntax:

1. call_close test_name ( [ parameter1, parameter2, ... parametern ] ); v. The test_name is the name of the test to invoke. The parameters are the parameters

defined for the called test. vi. The parameters are optional. However, when one test calls another, the call statement

should designate a value for each parameter defined for the called test. If no parameters are defined for the called test, the call statement must contain an empty set of parentheses.
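
As a hypothetical illustration, a main test might call a parameterized login test and then a cleanup test that should close when it finishes; the test paths and parameters are invented.

call "C:\\MyTests\\login_test" ("demo_user", "demo_password");   # control returns here when the called test ends
call_close "C:\\MyTests\\cleanup_test" ();                       # the called test is closed after it completes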

108) What is the use of treturn and texit statements in the test script?

a. The treturn and texit statements are used to stop execution of called tests. i. The treturn statement stops the current test and returns control to the calling test.

ii. The texit statement stops test execution entirely, unless tests are being called from a batch test. In this case, control is returned to the main batch test.

b. Both functions provide a return value for the called test. If treturn or texit is not used, or if no value is specified, then the return value of the call statement is 0.

treturn
c. The treturn statement terminates execution of the called test and returns control to the calling test. The syntax is:


treturn [( expression )];
d. The optional expression is the value returned to the call statement used to invoke the test.
texit
e. When tests are run interactively, the texit statement discontinues test execution. However, when

tests are called from a batch test, texit ends execution of the current test only; control is then returned to the calling batch test.

The syntax is: texit [( expression )];

109) Where do you set up the search path for a called test? a. The search path determines the directories that WinRunner will search for a called test. b. To set the search path, choose Settings > General Options. The General Options dialog box

opens. Click the Folders tab and choose a search path in the Search Path for Called Tests box. WinRunner searches the directories in the order in which they are listed in the box. Note that the search paths you define remain active in future testing sessions.

110) How do you create user-defined functions? Explain the syntax.

a. A user-defined function has the following structure:
[class] function name ( [mode] parameter... )
{
    declarations;
    statements;
}

b. The class of a function can be either static or public. A static function is available only to the test or module within which the function was defined.

c. Parameters need not be explicitly declared. They can be of mode in, out, or inout. For all non-array parameters, the default mode is in. For array parameters, the default is inout. The significance of each of these parameter types is as follows:

in: A parameter that is assigned a value from outside the function. out: A parameter that is assigned a value from inside the function.

inout: A parameter that can be assigned a value from outside or inside the function.
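
A sketch of this structure with invented names: the function takes two in parameters, assigns an out parameter, and returns a value to the caller.

public function get_full_name(in first, in last, out full)
{
    auto sep;                     # local (auto) variable, exists only while the function runs
    sep = " ";
    full = first & sep & last;    # the out parameter is assigned inside the function
    return 0;                     # value returned to the calling statement
}

# usage from a test:
get_full_name("Ada", "Lovelace", name);
report_msg("Full name is " & name);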

111) What do the static and public classes of a function mean? a. The class of a function can be either static or public. b. A static function is available only to the test or module within which the function was defined. c. Once you execute a public function, it is available to all tests, for as long as the test containing the

function remains open. This is convenient when you want the function to be accessible from called tests. However, if you want to create a function that will be available to many tests, you should place it in a compiled module. The functions in a compiled module are available for the duration of the testing session.

d. If no class is explicitly declared, the function is assigned the default class, public.

112) What do in, out and inout parameters mean? a. in: A parameter that is assigned a value from outside the function. b. out: A parameter that is assigned a value from inside the function. c. inout: A parameter that can be assigned a value from outside or inside the function.


113) What is the purpose of return statement? a. This statement passes control back to the calling function or test. It also returns the value of the

evaluated expression to the calling function or test. If no expression is assigned to the return statement, an empty string is returned. Syntax: return [( expression )];

114) What do auto, static, public and extern variables mean? a. auto: An auto variable can be declared only within a function and is local to that function. It

exists only for as long as the function is running. A new copy of the variable is created each time the function is called.

b. static: A static variable is local to the function, test, or compiled module in which it is declared. The variable retains its value until the test is terminated by an Abort command. This variable is initialized each time the definition of the function is executed.

c. public: A public variable can be declared only within a test or module, and is available for all functions, tests, and compiled modules.

d. extern: An extern declaration indicates a reference to a public variable declared outside of the current test or module.
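
A brief invented sketch of the auto/static difference: the static counter keeps its value from one call to the next, whereas an auto variable would be created afresh each time.

public function next_order_id()
{
    static id = 100;    # retains its value between calls to the function
    id++;
    return id;          # 101 on the first call, 102 on the second, and so on
}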

115) How do you declare constants?

a. The const specifier indicates that the declared value cannot be modified. The class of a constant may be either public or static. If no class is explicitly declared, the constant is assigned the default class public. Once a constant is defined, it remains in existence until you exit WinRunner.

b. The syntax of this declaration is: [class] const name [= expression];
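
For example (names and values invented):

public const MAX_RETRIES = 3;                      # available to all tests until you exit WinRunner
static const APP_TITLE = "Flight Reservation";     # available only in the declaring test or module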

116) How do you declare arrays? a. The following syntax is used to define the class and the initial expression of an array. Array size

need not be defined in TSL. b. class array_name [ ] [=init_expression] c. The array class may be any of the classes used for variable declarations (auto, static, public,

extern).
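
TSL arrays are associative, so subscripts may be strings as well as numbers. A minimal sketch with invented element names:

public user[];                    # declare a public array; no size is given
user["name"] = "demo_user";
user["role"] = "tester";
report_msg("Running as " & user["name"] & " (" & user["role"] & ")");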

117) How do you load and unload a compile module? a. In order to access the functions in a compiled module you need to load the module. You can load

it from within any test script using the load command; all tests will then be able to access the function until you quit WinRunner or unload the compiled module.

b. You can load a module either as a system module or as a user module. A system module is generally a closed module that is “invisible” to the tester. It is not displayed when it is loaded, cannot be stepped into, and is not stopped by a pause command. A system module is not unloaded when you execute an unload statement with no parameters (global unload).

load (module_name [,1|0] [,1|0] ); The module_name is the name of an existing compiled module.

Two additional, optional parameters indicate the type of module. The first parameter indicates whether the function module is a system module or a user module: 1 indicates a system module; 0 indicates a user module.

(Default = 0) The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded: 1 indicates that the module will close automatically; 0 indicates that the module will remain open.

(Default = 0)


c. The unload function removes a loaded module or selected functions from memory. d. It has the following syntax: unload ( [ module_name | test_name [ , "function_name" ] ] );
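
Putting it together (the module path is invented, and open_order is the sample function sketched under question 98): a test loads a user module, calls one of its functions, and later removes the module from memory.

load("C:\\MyAppFolder\\" & "flt_lib", 0, 1);    # user module (0) that closes automatically after loading (1)
open_order(3);                                   # any function defined in flt_lib is now available
unload("C:\\MyAppFolder\\" & "flt_lib");         # remove the whole module from memory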

118) Why you use reload function? a. If you make changes in a module, you should reload it. The reload function removes a loaded

module from memory and reloads it (combining the functions of unload and load). The syntax of the reload function is: reload ( module_name [ ,1|0 ] [ ,1|0 ] ); The module_name is the name of an existing compiled module.

Two additional optional parameters indicate the type of module. The first parameter indicates whether the module is a system module or a user module: 1 indicates a system module; 0 indicates a user module.

(Default = 0) The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded. 1 indicates that the module will close automatically. 0 indicates that the module will remain open.

(Default = 0)


LOADRUNNER

LoadRunner is Mercury Interactive’s tool for testing the performance of client/server systems. LoadRunner stresses your entire client/server system to isolate and identify potential client, network, and server bottlenecks.

LoadRunner enables you to test your system under controlled and peak load conditions. To generate

load, LoadRunner runs thousands of Virtual Users that are distributed over a network. Using a minimum of hardware resources, these Virtual Users provide consistent, repeatable, and measurable load to exercise your client/server system just as real users would.

LoadRunner’s in-depth reports and graphs provide the information that you need to evaluate the

performance of your client/server system.

Client/Server Load Testing

Modern client/server architectures are complex. While they provide an unprecedented degree of power

and flexibility, these systems are difficult to test. Whereas single-user testing focuses primarily on functionality and the user interface of a single application, client/server testing focuses on performance and reliability of an entire client/server system. For example, a typical client/server testing scenario might depict 1000 users that login simultaneously to a system on Monday morning: What is the response time of the system? Does the system crash? To be able to answer these questions—and more—a complete client/server performance testing solution must:

• test a system that combines a variety of software applications and hardware platforms • determine the suitability of a server for any given application • test the server before the necessary client software has been developed • emulate an environment where multiple clients interact with a single server application • test a client/server system under the load of tens, hundreds, or even thousands of potential users

Manual Testing Limitations

Traditional or manual testing methods offer only a partial solution to load testing. For example, you can test an entire system manually by constructing an environment where many users work simultaneously on the system. Each user works at a single machine and submits input to the system. However, this manual testing method has the following drawbacks:

• it is expensive, requiring large amounts of both personnel and machinery • it is complicated, especially coordinating and synchronizing multiple testers • it involves a high degree of organization, especially to record and analyze results meaningfully • the repeatability of the manual tests is limited


The LoadRunner Solution

The LoadRunner automated solution addresses the drawbacks of manual performance testing:

• LoadRunner reduces the personnel requirements by replacing human users with virtual users or Vusers. These Vusers emulate the behavior of real users—operating real applications.

• Because numerous Vusers can run on a single computer, LoadRunner reduces the hardware requirements.

• The LoadRunner Controller allows you to easily and effectively control all the Vusers—from a single point of control.

• LoadRunner monitors the client/server performance online, enabling you to fine-tune your system during test execution.

• LoadRunner automatically records the performance of the client/server system during a test. You can choose from a wide variety of graphs and reports to view the performance data.

• LoadRunner checks where performance delays occur: network or client delays, CPU performance, I/O delays, database locking, or other issues at the database server. LoadRunner monitors the network and server resources to help you improve performance.

• Because LoadRunner tests are fully automated, you can easily repeat them as often as you need.

Using LoadRunner Scenarios

Using LoadRunner, you divide your client/server performance testing requirements into scenarios. A scenario defines the events that occur during each testing session. Thus, for example, a scenario defines and controls the number of users to emulate, the actions that they perform, and the machines on which they run their emulations.

Vuser

In the scenario, LoadRunner replaces human users with virtual users or Vusers. When you run a scenario, Vusers emulate the actions of human users—submitting input to the server. While a workstation accommodates only a single human user, many Vusers can run concurrently on a single workstation. In fact, a scenario can contain tens, hundreds, or even thousands of Vusers.

To emulate conditions of heavy user load, you create a large number of Vusers that perform a series of tasks. For example, you can observe how a server behaves when one thousand Vusers simultaneously withdraw cash from the bank ATMs. To accomplish this, you create 1000 Vusers, and each Vuser:

1. enters an account number into an ATM
2. enters the amount of cash to withdraw
3. withdraws cash from the account
4. checks the balance of the account
5. repeats the process numerous times

Vuser Scripts

The actions that a Vuser performs during the scenario are described in a Vuser script. When you run a

scenario, each Vuser executes a Vuser script. The Vuser scripts include functions that measure and record the performance of the server during the scenario.


Transactions

To measure the performance of the server, you define transactions. A transaction represents an action or a set of actions that you are interested in measuring. You define transactions within your Vuser script by enclosing the appropriate sections of the script with start and end transaction statements. For example, you can define a transaction that measures the time it takes for the server to process a request to view the balance of an account and for the information to be displayed at the ATM.

Rendezvous points

You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.

Controller

You use the LoadRunner Controller to manage and maintain your scenarios. Using the Controller, you control all the Vusers in a scenario from a single workstation.

Hosts

When you execute a scenario, the LoadRunner Controller distributes each Vuser in the scenario to a host. The host is the machine that executes the Vuser script, enabling the Vuser to emulate the actions of a human user.

Performance analysis

Vuser scripts include functions that measure and record system performance during load-testing sessions. During a scenario run, you can monitor the network and server resources. Following a scenario run, you can view performance analysis data in reports and graphs.

LoadRunner Vuser Technology

On each Windows host, you install a Remote Command Launcher and an Agent.

Remote Command Launcher
The Remote Command Launcher enables the Controller to start applications on the host machine.

Agent
The Agent enables the Controller and the host to communicate with each other. When you run a scenario, the Controller instructs the Remote Command Launcher to launch the LoadRunner Agent. The Agent receives instructions from the Controller to initialize, run, pause, and abort Vusers. At the same time, the Agent also relays data on the status of the Vusers back to the Controller.


Note: LoadRunner and TestDirector are not covered in full in this version (0.1); they will be included in the next version (1.0).


25 Quality Assurance:

25.1 Difference between Quality Assurance and Quality Control:

Quality Assurance: A planned and systematic set of activities necessary to provide adequate confidence that requirements are properly established and products or services conform to specified requirements.
Quality Control: The process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected.

Quality Assurance: An activity that establishes and evaluates the processes used to produce the products.
Quality Control: An activity that verifies whether the product meets pre-defined standards.

Quality Assurance: Helps establish processes.
Quality Control: Implements the process.

Quality Assurance: Sets up measurement programs to evaluate processes.
Quality Control: Verifies whether specific attributes are present in a specific product or service.

Quality Assurance: Identifies weaknesses in processes and improves them.
Quality Control: Identifies defects for the primary purpose of correcting them.

Quality Assurance: QA is the responsibility of the entire team.
Quality Control: QC is the responsibility of the tester.

Quality Assurance: Prevents the introduction of issues or defects.
Quality Control: Detects, reports and corrects defects.

Quality Assurance: QA evaluates whether or not quality control is working, for the primary purpose of determining whether there is a weakness in the process.
Quality Control: QC evaluates whether the application is working, for the primary purpose of determining whether there is a flaw or defect in the functionality.

Quality Assurance: QA improves the process, which is applied to all products that will ever be produced by that process.
Quality Control: QC improves the development of a specific product or service.

Quality Assurance: QA personnel should not perform quality control unless they are doing it to validate that quality control is working.
Quality Control: QC personnel may perform quality assurance tasks if and when required.


ABOUT ISO 9001

QUALITY SYSTEMS - MODEL FOR QUALITY ASSURANCE IN DESIGN/DEVELOPMENT, PRODUCTION, INSTALLATION & SERVICING

Introduction This International Standard is one of three International Standards dealing with quality system requirements that can be used for external quality assurance purposes. The quality assurance models, set out in the three International Standards listed below, represent three distinct forms of quality system requirements suitable for the purpose of a supplier demonstrating its capability, and for the assessment of the capability of a supplier by external parties. - ISO 9001, Quality systems -- Model for quality assurance in design, development, production, installation

and servicing. For use when conformance to specified requirements is to be assured by the supplier during design,

development, production, installation and servicing. - ISO 9002, Quality systems -- Model for quality assurance in production, installation and servicing. For use when conformance to specified requirements is to be assured by the supplier during production,

installation and servicing. - ISO 9003, Quality systems -- Model for quality assurance in final inspection and test. For use when conformance to specified requirements is to be assured by the supplier solely at final

inspection and test. It is emphasized that the quality system requirements specified in this International Standard, ISO 9002 and ISO 9003 are complementary (not alternative) to the technical (product) specified requirements. They specify requirements which determine what elements quality systems have to encompass, but it is not the purpose of these International Standards to enforce uniformity of quality systems. They are generic, independent of specific industry or economic sector. The design and implementation of a quality system has necessarily to be influenced by the varying needs of an organization, its particular objectives, the products and services supplied and the processes and specific practices employed. It is intended that these International Standards will be adopted in their present form, but on occasions they may need to be tailored by adding or deleting certain quality system requirements for specific contractual situations. ISO 9000-1 provides guidance on such tailoring as well as on selection of the appropriate quality assurance model, viz. ISO 9001, ISO 9002 or ISO 9003.


Scope
This International Standard specifies quality system requirements for use where a supplier’s capability to design and supply conforming product needs to be demonstrated. The requirements specified are aimed primarily at achieving customer satisfaction by preventing nonconformity at all stages from design through to servicing. This International Standard is applicable in situations when:

a) design is required and the product requirements are stated principally in performance terms or they need to be established; and

b) confidence in product conformance can be attained by adequate demonstration of certain supplier’s capabilities in design, development, production, installation and servicing.

Normative Reference
The following Standard contains provisions which, through reference in this text, constitute provisions of this International Standard. At the time of publication, the edition indicated was valid. All Standards are subject to revision, and parties to agreements based on this International Standard are encouraged to investigate the possibility of applying the most recent edition of the standard indicated below. Members of IEC and ISO maintain registers of currently valid International Standards.
ISO 8402 : 1993 - Quality management and quality assurance -- Vocabulary.

Definitions
For the purposes of this International Standard, the definitions given in ISO 8402 and the following definitions apply.

Product

Result of activities or processes.
Note: A product may include service, hardware, processed material, software, or a combination thereof.
Note: A product can be tangible (e.g. assemblies or processed materials) or intangible (e.g. knowledge or concepts), or a combination thereof.
Note: For the purposes of this International Standard, the term ‘product’ applies to the intended product offering only and not to unintended ‘by-products’ affecting the environment. This differs from the definition given in ISO 8402.

Tender

Offer made by a supplier in response to an invitation to satisfy a contract award to provide product.


Contract Agreed requirements between a supplier and customer transmitted by any means. Quality system requirements

Management responsibility

Quality Policy The supplier’s management with executive responsibility shall define and document its policy for quality, including objectives for quality and its commitment to quality. The quality policy shall be relevant to the supplier’s organizational goals and the expectations and needs of its customers. The supplier shall ensure that this policy is understood, implemented and maintained at all levels in the organization.

Organization

Responsibility and authority The responsibility, authority and the interrelation of personnel who manage, perform and verify work affecting quality shall be defined and documented particularly for personnel who need the organizational freedom and authority to :

a) Initiate action to prevent the occurrence of any nonconformities relating to the product, process and quality system; b) Identify and record any problems relating to the product, process and quality system; c) Initiate, recommend or provide solutions through designated channels; d) verify the implementation of solutions;

e) control further processing, delivery or installation of nonconforming product until the deficiency or unsatisfactory condition has been corrected.

Resources

The supplier shall identify resource requirements and provide adequate resources, including the assignment of trained personnel (see 4.18) for management, performance of work and verification activities including internal quality audits.

Management representative The supplier’s management with executive responsibility shall appoint a member of the supplier’s own management who, irrespective of other responsibilities, shall have defined authority for

a) ensuring that a quality system is established, implemented and maintained in accordance with this International Standard; and


b) reporting on the performance of the quality system to the supplier’s management for review and as a basis for improvement of the quality system.

Note : The responsibility of a management representative may also include liaison with external parties on matters relating to the supplier’s quality system.

Management review
The supplier’s management with executive responsibility shall review the quality system at defined intervals sufficient to ensure its continuing suitability and effectiveness in satisfying the requirements of this International Standard and the supplier’s stated quality policy and objectives (see 4.1.1). Records of such reviews shall be maintained (see 4.16).

Quality system

General
The supplier shall establish, document and maintain a quality system as a means of ensuring that product conforms to specified requirements. The supplier shall prepare a quality manual covering the requirements of this International Standard. The quality manual shall include or make reference to the quality system procedures and outline the structure of the documentation used in the quality system.
Note: Guidance on quality manuals is given in ISO 10013.

Quality system procedures The supplier shall

a) prepare documented procedures consistent with the requirements of this International Standard and the suppliers stated quality policy and b) effectively implement the quality system and its documented procedures.

For the purposes of this International Standard, the range and detail of the procedures that form part of the quality system shall be dependent upon the complexity of the work, the methods used, and the skills and training needed by personnel involved in carrying out the activity. Note : Documented procedures may make reference to work instructions that define how an activity is performed.

Quality planning The supplier shall define and document how the requirements for quality will be met. Quality planning shall be consistent with all other requirements of a supplier’s quality system and shall be documented in a format to suit the supplier’s method of operation. The supplier shall give consideration to the following activities, as appropriate, in meeting the specified requirements for products, project or contracts :


a) the preparation of quality plans b) the identification and acquisition of any controls, processes, equipment (including inspection and test equipment), fixtures, resources and skills that may be needed to achieve the required quality; c) ensuring the compatibility of the design, the production process, installation, inspection and test procedures and the applicable documentation; d) the updating, as necessary, of quality control, inspection and testing techniques, including the development of new instrumentation; e) the identification of any measurement requirement involving capability that exceeds the known state of the art, in sufficient time for the needed capability to be developed; f) the identification of suitable verification at appropriate stages in the realization of product; g) the clarification of standards of acceptability for all features and requirements, including those which contain a subjective element; h) the identification and preparation of quality records (see 4.16).

Note : The quality plans referred to (see 4.2.3a) may be in the form of a reference to the appropriate documented procedures that form an integral part of the supplier’s quality system.

Contract review

General The supplier shall establish and maintain documented procedures for contract review and for the coordination of these activities.

Review Before submission of a tender, or the acceptance of a contract or order (statement of requirement), the tender, contract shall be reviewed by the supplier to ensure that :

a) the requirements are adequately defined and documented; where no written statement of requirement is available for an order received by verbal means, the supplier shall ensure that the order requirements are agreed before their acceptance; b) any differences between the contract or order requirements and those in the tender are resolved; c) the supplier has the capability to meet contract or order requirements.


Amendment to a contract The supplier shall identify how an amendment to a contract is made and correctly transferred to the functions concerned within the supplier’s organization.

Records

Records of such contract reviews shall be maintained (see 4.16).
Note: Channels for communication and interfaces with the customer’s organization in these contract matters should be established.

Design control

General The supplier shall establish and maintain documented procedures to control and verify the design of the product in order to ensure that the specified requirements are met.

Design and development planning The supplier shall prepare plans for each design and development activity. The plans shall describe or reference these activities, and define responsibility for their implementation. The design and development activities shall be assigned to qualified personnel equipped with adequate resources. The plans shall be updated as the design evolves.

Organizational and technical interfaces Organizational and technical interfaces between different groups which input into the design process shall be identified and the necessary information documented, transmitted and regularly reviewed.

Design input Design input requirements relating to the product, including applicable statutory and regulatory requirements shall be identified, documented and their selection reviewed by the supplier for adequacy. Incomplete, ambiguous or conflicting requirements shall be resolved with those responsible for drawing up these requirements. Design input shall take into consideration the results of any contract review activities.

Design output Design output shall be documented and expressed in terms that can be verified and validated against design input requirements. Design output shall : a) meet the design input requirements; b) contain or make references to acceptance criteria;


c) identify those characteristics of the design that are crucial to the safe and proper functioning of the product (e.g. operating, storage, handling, maintenance and disposal requirements). Design output documents shall be reviewed before release.

Design review At appropriate stages of design, formal documented reviews of the design results shall be planned and conducted. Participants at each design review shall include representatives of all functions concerned with the design stage being reviewed, as well as other specialist personnel, as required. Records of such reviews shall be maintained

Design verification At appropriate stages of design, design verification shall be performed to ensure that the design stage output meets the design stage input requirement. The design verification measures shall be recorded Note : In addition to conducting design reviews design verification may include activities such as - performing alternative calculations - comparing the new design with a similar proven design, if available. - undertaking tests and demonstrations; and - reviewing the design stage documents before release.

Design validation Design validation shall be performed to ensure that product conforms to defined user needs and requirements.

Note : Design validation follows successful design verification (see 25.5.4.7)

Note : Validation is normally performed under defined operating conditions.

Note : Validation is normally performed on the final product, but may be necessary in earlier stages prior to product completion.

Note : Multiple validations may be performed if there are different intended uses.

Design changes All design changes and modifications shall be identified, documented, reviewed and approved by authorized personnel before their implementation.


Document and data control

General The supplier shall establish and maintain documented procedures to control all documents and data that relate to the requirements of this International Standard including, to the extent applicable, documents of external origin such as standards and customer drawings. Note : Documents and data can be in the form of any type of media, such as hard copy or electronic media.

Document and data approval and issue The documents and data shall be reviewed and approved for adequacy by authorized personnel prior to issue. A master list or equivalent document control procedure identifying the current revision status of documents shall be established and be readily available to preclude the use of invalid and/or obsolete documents. The control shall ensure that :

a) the pertinent issues of appropriate documents are available at all locations where operations essential to the effective functioning of the quality system are performed; b) invalid and/or obsolete documents are promptly removed from all points of issue or use, or otherwise assured against unintended use. c) any obsolete documents retained for legal and/or knowledge preservation purposes are suitably identified.

Document and data changes

Changes to documents shall be reviewed and approved by the same functions/organizations that performed the original review and approval, unless specifically designated otherwise. The designated organizations shall have access to pertinent background information upon which to base their review and approval. Where practicable, the nature of the change shall be identified in the document or the appropriate attachments.

Purchasing

General
The supplier shall ensure that purchased product (see 3.1) conforms to specified requirements.

Evaluation of sub-contractors The supplier shall

a) evaluate and select sub-contractors on the basis of their ability to meet sub-contract requirements, including the quality system and any specific quality assurance requirements.


b) define the type and extent of control exercised by the supplier over sub-contractors. This shall be dependent upon the type of product, the impact of sub-contracted product on the quality of final product and, where applicable, on the quality audit reports and/or quality records of previously demonstrated capability and performance of subcontractors; c) establish and maintain quality records of acceptable subcontractors (see 4.16)

The supplier shall ensure that quality system controls are effective.

Purchasing data Purchasing documents shall contain data clearly describing the product ordered, including, where applicable :

a) the type, class, style, grade or other precise identification;

b) the title or other positive identification, and applicable issue of specifications, drawings, process requirements, inspection instructions and other relevant technical data, including requirements for approval or qualification of product, procedures, process equipment and personnel;

c) the title, number and issue of the quality system International Standard to be applied

The supplier shall review and approve purchasing documents for adequacy of specified requirements prior to release.

Verification of purchased product

Supplier verification at subcontractor’s premises Where the supplier proposes to verify purchased product at the subcontractor’s premises, the supplier shall specify verification arrangements and the method of product release in the purchasing documents.

Customer verification of subcontracted product
Where specified in the contract, the supplier’s customer or the customer’s representative shall be afforded the right to verify at the subcontractor’s premises and the supplier’s premises that subcontracted product conforms to specified requirements. Such verification shall not be used by the supplier as evidence of effective control of quality by the subcontractor. Verification by the customer shall not absolve the supplier of the responsibility to provide acceptable product, nor shall it preclude subsequent rejection by the customer.

Control of customer-supplied product
The supplier shall establish and maintain documented procedures for the control of verification, storage and maintenance of customer-supplied product provided for incorporation into the supplies or for related activities. Any such product that is lost, damaged or is otherwise unsuitable for use shall be recorded and reported to the purchaser. Verification by the supplier does not absolve the purchaser of the responsibility to provide acceptable product.


Product identification and traceability
Where appropriate, the supplier shall establish and maintain documented procedures for identifying the product by suitable means from receipt and during all stages of production, delivery and installation. Where, and to the extent that, traceability is a specified requirement, the supplier shall establish and maintain documented procedures for unique identification of individual product or batches. This shall be recorded (see 4.16).

Process control
The supplier shall identify and plan the production, installation and servicing processes which directly affect quality and shall ensure that these processes are carried out under controlled conditions. Controlled conditions shall include the following:

a) documented procedures defining the manner of production, installation and servicing, where the absence of such procedures could adversely affect quality;

b) use of suitable production, installation and servicing equipment, and a suitable working environment; c) compliance with reference standards/codes and quality plans and/or documented procedures; d) monitoring and control of suitable process and product characteristics e) the approval of process and equipment, as appropriate;

f) criteria for workmanship which shall be stipulated, in the clearest practical manner (e.g. written

standards, representative samples or illustrations. g) suitable maintenance of equipment to ensure continuing process capability.

Where the results of processes cannot be fully verified by subsequent inspection and testing of the product and where, for example, processing deficiencies may become apparent only after the product is in use, the processes shall be carried out by qualified operators and/or shall require continuous monitoring and control of process parameters to ensure that the specified requirements are met. The requirements for any qualification of process operations, including associated equipment and personnel, shall be specified.

Note : Such processes requiring pre-qualification of their process capability are frequently referred to as special processes.

Records shall be maintained for qualified processes, equipment and personnel, as appropriate.


Inspection and testing

General

The supplier shall establish and maintain documented procedures for inspection and testing activities in order to verify that the specified requirements for the product are met. The required inspection and testing, and the records to be established, shall be detailed in the quality plan or documented procedures.

Receiving inspection and testing

The supplier shall ensure that incoming product is not used or processed (except in the circumstances described below) until it has been inspected or otherwise verified as conforming to specified requirements. Verification of conformance to the specified requirements shall be in accordance with the quality plan and/or documented procedures.

In determining the amount and nature of receiving inspection, consideration should be given to the amount of control exercised at the subcontractor's premises and the recorded evidence of conformance provided.

Where incoming product is released for urgent production purposes prior to verification, it shall be positively identified and recorded in order to permit immediate recall and replacement in the event of nonconformity to specified requirements.

In-process inspection and testing

The supplier shall :

a) inspect and test the product as required by the quality plan and/or documented procedures;

b) hold product until the required inspection and tests have been completed or necessary reports have been received and verified, except when product is released under positive-recall procedures. Release under positive-recall procedures shall not preclude the activities outlined in a) above.

Final inspection and testing

The supplier shall carry out all final inspection and testing in accordance with the quality plan and/or documented procedures to complete the evidence of conformance of the finished product to the specified requirements. The quality plan and/or documented procedures for final inspection and testing shall require that all specified inspection and tests, including those specified either on receipt of product or in-process, have been carried out and that the results meet specified requirements. No product shall be dispatched until all the activities specified in the quality plan and/or documented procedures have been satisfactorily completed and the associated data and documentation are available and authorized.


Inspection and test records

The supplier shall establish and maintain records which provide evidence that the product has been inspected and/or tested. These records shall show clearly whether the product has passed or failed the inspections and/or tests according to defined acceptance criteria. Where the product fails to pass any inspection and/or test, the procedures for control of nonconforming product shall apply. Records shall identify the inspection authority responsible for the release of product (see 4.16).
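To make this record-keeping requirement concrete in a software testing context, the sketch below shows one possible shape for such a record. It is only an illustration : the TestRecord fields, the acceptance criterion and the record_result helper are assumptions made for this example, not part of any standard.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TestRecord:
    """One inspection/test record: what was tested, the acceptance
    criterion it was judged against, the outcome, and who released it."""
    item_id: str             # product or build under test
    test_name: str
    acceptance_criterion: str
    measured_value: float
    limit: float
    passed: bool
    inspector: str           # inspection authority responsible for release
    timestamp: str

def record_result(item_id: str, test_name: str, criterion: str,
                  measured: float, limit: float, inspector: str) -> TestRecord:
    # The pass/fail decision is derived from the defined acceptance
    # criterion, so the record shows clearly whether the item passed.
    return TestRecord(
        item_id=item_id,
        test_name=test_name,
        acceptance_criterion=criterion,
        measured_value=measured,
        limit=limit,
        passed=measured <= limit,
        inspector=inspector,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# Example: response time must not exceed 2.0 seconds.
rec = record_result("build-42", "login_response_time",
                    "response time <= 2.0 s", 1.7, 2.0, "QA-lead")
print(rec.passed)  # True -> may be released; False -> nonconforming

A record of this kind satisfies the spirit of the clause only if it is retained and retrievable; the storage mechanism is left open here.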

Control of inspection, measuring and test equipment

General

The supplier shall establish and maintain documented procedures to control, calibrate and maintain inspection, measuring and test equipment (including test software) used by the supplier to demonstrate the conformance of product to the specified requirements. Inspection, measuring and test equipment shall be used in a manner which ensures that the measurement uncertainty is known and is consistent with the required measurement capability.

Where test software or comparative references such as test hardware are used as suitable forms of inspection, they shall be checked to prove that they are capable of verifying the acceptability of product, prior to release for use during production, installation or servicing, and shall be rechecked at prescribed intervals. The supplier shall establish the extent and frequency of such checks and shall maintain records as evidence of control.

Where the availability of technical data pertaining to the inspection, measuring and test equipment is a specified requirement, such data shall be made available, when required by the customer or customer's representative, for verification that the inspection, measuring and test equipment is functionally adequate.

Note : For the purposes of this International Standard, the term 'measuring equipment' includes measurement devices.
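One way to read the requirement that test software be "checked to prove that [it is] capable of verifying the acceptability of product" and rechecked at prescribed intervals is a known-answer check run before the tool is trusted. The sketch below is an assumption-laden illustration : the checksum function and the reference values are invented for the example, and the check would normally be scheduled, not run once.

# A minimal "prove before use" check for test software: the tool is run
# against reference inputs with known, independently verified outputs.
def measure_checksum(data: bytes) -> int:
    """The test software under control: here, a simple additive checksum."""
    return sum(data) % 256

# Reference items with known expected results (assumed values).
KNOWN_ANSWERS = [
    (b"abc", 38),        # 97 + 98 + 99 = 294; 294 % 256 = 38
    (b"\x00\xff", 255),
]

def verify_test_software() -> bool:
    """Re-run at prescribed intervals; a failure blocks use of the tool."""
    return all(measure_checksum(data) == expected
               for data, expected in KNOWN_ANSWERS)

if not verify_test_software():
    raise RuntimeError("Test software failed its reference check; "
                       "do not use it to accept product.")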

Control Procedure

The supplier shall :

a) determine the measurements to be made and the accuracy required, and select the appropriate inspection, measuring and test equipment that is capable of the necessary accuracy and precision;

b) identify all inspection, measuring and test equipment that can affect product quality, and calibrate and adjust them at prescribed intervals, or prior to use, against certified equipment having a known valid relationship to internationally or nationally recognized standards. Where no such standards exist, the basis used for calibration shall be documented;

c) define the process employed for the calibration of inspection, measuring and test equipment, including details of equipment type, unique identification, location, frequency of checks, check method, acceptance criteria and the action to be taken when results are unsatisfactory;

d) identify inspection, measuring and test equipment with a suitable indicator or approved identification record to show the calibration status;

e) maintain calibration records for inspection, measuring and test equipment (see 25.5.16);


f) assess and document the validity of previous inspection and test results when inspection, measuring or test equipment is found to be out of calibration (illustrated below);

g) ensure that the environmental conditions are suitable for the calibrations, inspections, measurements and tests being carried out;

h) ensure that the handling, preservation and storage of inspection, measuring and test equipment are such that the accuracy and fitness for use are maintained;

i) safeguard inspection, measuring and test facilities, including both test hardware and test software, from adjustments which would invalidate the calibration setting.
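Item f) above asks for an assessment of earlier results when equipment turns out to be out of calibration. A minimal sketch of that assessment, using assumed dates and record identifiers, might look like this : any result recorded after the last calibration known to be good is flagged for reassessment.

from datetime import date

# Assumed data: calibration history of one instrument and the test
# results it produced.
calibrations = [                 # (date, passed?)
    (date(2004, 1, 5), True),
    (date(2004, 4, 5), False),   # found out of calibration here
]
results = [                      # (date, result id)
    (date(2004, 2, 10), "TR-101"),
    (date(2004, 3, 20), "TR-102"),
    (date(2004, 4, 2), "TR-103"),
]

last_good = max(d for d, ok in calibrations if ok)
suspect = [rid for d, rid in results if d > last_good]
print("Results to reassess:", suspect)   # all three, in this example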

Note : The metrological confirmation system for measuring equipment given in ISO 10012 may be used for guidance.

Inspection and test status

The inspection and test status of product shall be identified by suitable means, which indicate the conformance or nonconformance of product with regard to inspection and tests performed. The identification of inspection and test status shall be maintained, as defined in the quality plan and/or documented procedures, throughout production, installation and servicing of the product to ensure that only product that has passed the required inspection and tests (or has been released under an authorized concession (see 4.13.2)) is dispatched, used or installed.

Control of nonconforming product

General

The supplier shall establish and maintain documented procedures to ensure that product that does not conform to specified requirements is prevented from unintended use or installation. This control shall provide for identification, documentation, evaluation, segregation (when practical), disposition of nonconforming product, and for notification to the functions concerned.

Review and disposition of nonconforming product

The responsibility for review and authority for the disposition of nonconforming product shall be defined. Nonconforming product shall be reviewed in accordance with documented procedures. It may be :

a) reworked to meet the specified requirements,

b) accepted with or without repair by concession,

c) regraded for alternative applications, or

d) rejected or scrapped.

When required by the contract, the proposed use or repair of product (see 25.5.13.2 b) which does not conform to specified requirements shall be reported for concession to the customer or customer's representative. The description of nonconformity that has been accepted, and of repairs, shall be recorded to denote the actual condition. Repaired and/or reworked product shall be re-inspected in accordance with the quality plan and/or documented procedures.

Corrective and preventive action

General The supplier shall establish and maintain documented procedures for implementing corrective and preventive action. Any corrective or preventive action taken to eliminate the causes of actual or potential nonconformities shall be to a degree appropriate to the magnitude of problems and commensurate with the risks encountered. The supplier shall implement and record any changes to the documented procedures resulting from corrective and preventive action.

Corrective action

The procedures for corrective action shall include :

a) the effective handling of customer complaints and reports of product nonconformities;

b) investigation of the cause of nonconformities relating to product, process and quality system, and recording the results of the investigation (see 4.16);

c) determination of the corrective action needed to eliminate the cause of nonconformities;

d) application of controls to ensure that corrective action is taken and that it is effective.

Preventive action

The procedures for preventive action shall include :

a) the use of appropriate sources of information (such as processes and work operations which affect product quality, concessions, audit results, quality records, service reports and customer complaints) to detect, analyze and eliminate potential causes of nonconformities, as in the sketch following this list;

b) determination of the steps needed to deal with any problems requiring preventive action;

c) initiation of preventive action and application of controls to ensure that it is effective;

d) ensuring that relevant information on actions taken is submitted for management review (see 4.1.3).
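As an illustration of item a) above, quality records and complaint data can be ranked by cause so that preventive action targets the largest contributors first. The sketch below uses invented sample data and a simple frequency (Pareto-style) count; it is one possible analysis, not a prescribed one.

from collections import Counter

# Illustrative nonconformity causes pulled from quality records,
# audit results and customer complaints (sample data, not real).
causes = [
    "ambiguous requirement", "missing regression test", "config drift",
    "ambiguous requirement", "missing regression test",
    "ambiguous requirement", "untrained operator",
]

# Rank causes by frequency so preventive action addresses the biggest
# contributors first.
for cause, count in Counter(causes).most_common():
    share = 100.0 * count / len(causes)
    print(f"{cause:25s} {count:2d}  ({share:4.1f}%)")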


Handling, storage, packaging, preservation and delivery

General

The supplier shall establish, document and maintain documented procedures for handling, storage, packaging, preservation and delivery of product.

Handling

The supplier shall provide methods and means of handling that prevent damage or deterioration.

Storage

The supplier shall use designated storage areas or stock rooms to prevent damage or deterioration of product, pending use or delivery. Appropriate methods for authorizing receipt and dispatch from such areas shall be stipulated. In order to detect deterioration, the condition of product in stock shall be assessed at appropriate intervals.

Packaging

The supplier shall control packing, packaging, and marking processes (including materials used) to the extent necessary to ensure conformance to specified requirements.

Preservation

The supplier shall apply appropriate methods for preservation and segregation of product when the product is under the supplier's control.

Delivery

The supplier shall arrange for the protection of the quality of product after final inspection and test. Where contractually specified, this protection shall be extended to include delivery to destination.

Control of quality records

The supplier shall establish and maintain documented procedures for identification, collection, indexing, access, filing, storage, maintenance and disposition of quality records. Quality records shall be maintained to demonstrate conformance to specified requirements and the effective operation of the quality system. Pertinent quality records from the subcontractor shall be an element of these data. All quality records shall be legible and shall be stored and retained in such a way that they are readily retrievable in facilities that provide a suitable environment to prevent damage or deterioration and to prevent loss. Retention times of quality records shall be established and recorded. Where agreed contractually, quality records shall be made available for evaluation by the customer or the customer's representative for an agreed period.

Note : Records may be in the form of any type of media, such as hard copy or electronic media.


Internal quality audits

The supplier shall establish and maintain documented procedures for planning and implementing internal quality audits to verify whether quality activities and related results comply with planned arrangements and to determine the effectiveness of the quality system. Internal quality audits shall be scheduled on the basis of the status and importance of the activity to be audited and shall be carried out by personnel independent of those having direct responsibility for the activity being audited. The results of the audits shall be recorded (see 25.5.16) and brought to the attention of the personnel having responsibility in the area audited. The management personnel responsible for the area shall take timely corrective action on the deficiencies found during the audit. Follow-up audit activities shall verify and record the implementation and effectiveness of the corrective action taken.

Note : The results of the internal quality audits form an integral part of the input to management review activities.

Note : Guidance on quality system audits is given in ISO 10011.

Training

The supplier shall establish and maintain documented procedures for identifying training needs and provide for the training of all personnel performing activities affecting quality. Personnel performing specific assigned tasks shall be qualified on the basis of appropriate education, training and/or experience, as required. Appropriate records of training shall be maintained (see 4.16).

Servicing

Where servicing is a specified requirement, the supplier shall establish and maintain documented procedures for performing and verifying that servicing meets the specified requirements.

Statistical techniques

Identification of need

The supplier shall identify the need for statistical techniques required for establishing, controlling and verifying process capability and product characteristics.

Procedures

The supplier shall establish and maintain documented procedures to implement and control the application of the statistical techniques identified above.
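Process capability, mentioned above, is commonly expressed through indices such as Cp and Cpk, where Cp = (USL - LSL) / (6 * sigma) and Cpk = min((USL - mean) / (3 * sigma), (mean - LSL) / (3 * sigma)). The sketch below computes both for a measured characteristic; the sample data, the specification limits and the 1.33 threshold mentioned in the comment are assumptions made for illustration.

from statistics import mean, stdev

def process_capability(samples, lsl, usl):
    """Return (Cp, Cpk) for a measured characteristic against its
    lower/upper specification limits. Values of roughly 1.33 or more
    are often treated as capable, but that threshold is an assumption."""
    mu = mean(samples)
    sigma = stdev(samples)          # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))
    return cp, cpk

# Example: build times (minutes) against an assumed 8-12 minute window.
times = [9.8, 10.1, 10.4, 9.9, 10.2, 10.0, 10.3, 9.7]
print(process_capability(times, lsl=8.0, usl=12.0))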


ISO 9000 BASICS

1. What is ISO 9000 ?

The ISO 9000 (Series) is a set of standards which provides a framework for Quality Management Systems.

2. What is ISO ?

ISO is the acronym for the International Standards Organisation based in Geneva, Switzerland. The ISO is made up of the national standards bodies of 91 countries. India, too, is represented in the ISO by the Bureau of Indian Standards (BIS), which was formerly known as the Indian Standards Institute (ISI).

3. What is the significance of the number 9000 ?

There is no particular significance, except that the ISO wanted to give a distinct identity to standards associated with Quality Systems. The ISO has similarly introduced standards numbered from 10000 onwards, known as the 10000 series. These standards relate to auditing and deal with subjects such as auditing principles, auditor qualifications, etc.

4. How is ISO 9000 different from other standards that we have been using ?

You may normally be conversant with product standards, inspection standards and so on. However, the ISO 9000 series does not contain any specifications related to any particular product. Instead it focuses on the systems that you use to make your products. It is a set of standards for quality systems and is therefore different from other standards that you normally use.

5. When, where and by whom was ISO 9000 formulated ?

The standard was formulated in 1987 and, like all other ISO standards, was formulated by a committee of the ISO. The 9000 series of standards was formulated by a committee known as TC/176 (Technical Committee 176). This committee was chaired by a representative from the Canadian Standards Association.

6. What is the basic objective of the ISO 9000 series ?

As mentioned in the scope of ISO 9001, "The requirements specified in this international standard are aimed primarily at preventing non-conformity at all stages from design through to servicing".

7. How does the standard help in 'preventing nonconformity' ?

The underlying philosophy is that if all critical activities are planned to be performed in a specified manner, quality and consistency per se get 'built into' the product. There are numerous factors which affect product quality at various stages. It starts from the Marketing officer's understanding of the customer's requirements, the thoroughness with which the design is made and reviewed and so on, right down to the way the material is dispatched from the stock-yards. The standard provides a model with the help of which :


• You can foresee those activities or loopholes which may cause trouble later on, or

• Learn from the mistakes that have already occurred, and therefore plan these activities in such a way that you prevent mistakes or bad quality from occurring, in the first place or in the future.

Once you have identified the customer's requirements and the important activities that have a bearing on meeting those requirements, you are required to formalise and standardise the right way of performing these activities by documenting them. This is how quality is built into the product using ISO 9000 as a model. Subsequently, all employees are trained to perform these activities only in the right way that has been prescribed. This ensures consistency from person to person, and from one period to another. The standard procedures are to be followed till such time that you can think of a better way, or if customers demand change. In such cases you will need to redefine your standard procedures.

8. Why is consistency so important ?

If you were a regular passenger of a train, which train would you be more comfortable with - one that invariably arrives 15 minutes late, or another that sometimes arrives 30 minutes before time and sometimes 15 minutes late ? For a regular traveller the first option would definitely be preferable. Similarly, customers plan for the use of your products taking into account the excellence or mediocrity of your product. If they expect mediocre products, they take sufficient precautions. Conversely, if they expect excellent products they remove all checks within their system. They would obviously be unhappy if you kept them guessing, by sending them excellent products on one day and below-par products on another. They would prefer to deal with a consistently average quality supplier than with an inconsistently excellent one.

9. Is ISO 9000 a new fad that has been picked up ? Why has it become a catchword all of a sudden ?

The attention being given to ISO 9000 seems sudden for two reasons. One, the standard itself was created only as recently as 1987, as a result of putting together the components of similar standards from various countries, especially the British Standard BS 5750. Two, the European community came together as one trading community around Christmas time in 1992. To facilitate free trade between the twelve member nations in the new unified market, they have identified 78 technical directives or standards known as EURONORMS, which will be commonly applicable to all of them. These EURONORMS deal with areas such as safety, health, pollution, transmission frequencies for television, customs duties and the common currency - the ECU. By spring 1990, 58 directives had already been adopted, of which the ISO 9000 standard is the one which has created the biggest ripple effect within and outside Europe. It implies that companies within or outside Europe will have to be certified to ISO 9000 if they wish to have free trading access to distribution channels in all the nations of the European community. Whether ISO 9000 is 100% mandatory or not is ambiguous to date; different agencies have their own differing versions. Hence the comparative rush to get certified to this standard by those organisations that foresee the possibility of doing business with the unified market of the EEC.


10. Will ISO 9000 certification guarantee product acceptance in Europe ?

No. ISO 9000 is only a passport to Europe, not a visa. A passport only makes you eligible for entry, but does not give you the right to enter. In this analogy, a visa would be your ability to specifically satisfy a customer's requirements in terms of specifications, cost, delivery and the like. Certification only helps in increasing your customers' confidence in your ability to meet their requirements. It does not necessarily mean that customers will automatically accept your product. However, experience has shown that, by virtue of having good systems, ISO 9000 companies are usually granted the freedom to self-certify their products. In such cases, customers do not insist on inspection of goods purchased from you, and instead carry out random product audits only.

11. Will it be mandatory for exports to other countries and for trade within India too ?

Considering the fact that the standard has been accepted by about 60 countries world-wide, it is only a matter of time before other countries demand compliance to this standard. For similar reasons, it is possible that in India too, discerning customers may start demanding certification to ISO 9000 from their suppliers. The logic is simple : given a choice between two suppliers who have similar quality levels, a customer would definitely have more confidence in the one whose systems have been verified by an independent third-party certification agency.


THE NEED FOR A QUALITY SYSTEMS STANDARD

1. What is a Quality System ?

ISO 8402 : Quality Vocabulary defines a Quality System as "The organisational structure, responsibilities, procedures, processes and resources for implementing Quality Management". In simpler terms, a Quality System is nothing but the matrix which binds together or inter-relates the important individual components of an organisation, i.e. men, material, machines and methods.

2. Our company has been in existence for a very long time. Do you think we could have existed for so long without any systems ?

You are right. The very fact that your company has existed for such a long period of time is proof that systems do exist.

3. In that case, why should we do anything about systems at this stage ?

The existence of a company, and thereby its systems, is not proof enough that all systems in the company, in their entirety, are working efficiently and effectively. (Note : Efficient means doing things right; effective means doing the right thing.) Since a system is like a chain which links the various important components of your company, the system is only as strong as its weakest link.

4. But how do we find out whether our systems need improvement ?

Are the following statements commonplace in your work life :

• Why didn't you specify ?
• Who approved this ?
• Why didn't I get a copy ?
• Who authorised the change ?
• Where is the documented evidence ?
• That is not my responsibility.
• Why did we buy from those people ?
• Who inspected that ?
• Who allowed this new person to work on this without being trained ?
• I didn't have an up-to-date specification.
• Don't you know this drawing has been revised ?
• But ... we have always done it this way.

If your answer is, "Yes, we do hear such statements in our organisation", then these symptoms are a sure sign that your systems need improvement.


5. Does ISO 9000 prescribe solutions to remove all the above-mentioned symptoms of inefficient systems ?

ISO 9000 does not prescribe solutions. It just provides a model with which you can compare the status of your existing systems. You can use it to determine those systems which need improvement, as well as systems that are your strengths and can therefore be left alone. Once you have done that, you have to find your own solutions for ineffective systems.

6. In short, what does ISO 9000 demand of an organisation's systems ?

It requires that all important activities which have a bearing on quality be documented and performed in a consistent manner by all concerned. In other words, it demands standardising the method of performing all critical activities in an organisation.

7. Standardising our method of working ! In an era of increasing freedom, aren't we going a step backwards by dampening individual creativity and turning everyone into an automaton, by telling them what to do, where, when, how and so on ?

Standardisation is a widely misinterpreted term - on the surface it seems to oppose originality and creativity. But on the contrary, standards prevent the state of chaos that would be natural if everyone worked the way he pleased. As Henry Ford said nearly 60 years ago :

"To standardise a method is to choose the best out of many methods and use it. Standardisation means nothing unless it means standardising upwards. What is the best way to do a thing ? It is the sum of all the good ways that we have discovered up to the present. It therefore becomes the standard. To suggest that today's standard shall be tomorrow's is erroneous. Such a decree (if issued) cannot stand. We see all around us yesterday's standards, but no one mistakes them for today's. Today's best, which superseded yesterday's, will be superseded by tomorrow's best. This is a fact that theorists often overlook. They assume that a standard is a steel mould by which it is expected to shape and confine all effort for an indefinite time. If that were possible, we should today be using the standards of one hundred years ago... Today's standardisation, instead of being a barricade against improvement, is the necessary foundation on which tomorrow's improvements will be based. If you think of 'standardisation' as the best of what you know today, but which is to be improved tomorrow - you get somewhere. But if you think of standards as confining, then progress stops."

Therefore, standardising a system will not dampen creativity. On the contrary, it can be used as a means for tapping creativity. A documented system coupled with training of personnel and regular audits ensures that all people doing a similar job do it in the best method available or known at that point of time. If a better way of working is devised subsequently, the documents in the system are revised to introduce the improved method. In fact the Japanese have a saying that "If a procedure or a work instruction has not been revised in the past six months, it is a sure sign that it is not being followed". Contrary to popular belief, a documented system can spur creativity in many ways, such as :


• brainstorming to decide the best method for doing a particular job, so that it can be put on trial and documented if successful. The participation of the concerned group in deciding a procedure ensures commitment to it once it is documented.

• acting as a reference for comparison before putting forth a new idea.

8. Isn't this standard an unnecessary headache for those not in the export business ?

No. It would be very short-sighted if someone thought so. ISO 9000 sets out the framework for you to establish, document and maintain an effective Quality Management System. A system built on such lines provides assurance to your customers that you are committed to quality and that you have the ability to meet their quality needs consistently. It is an internationally accepted standard, and therefore a system built on the guidelines of this standard provides a strong foundation for the sustenance and improvement of your business, irrespective of whether your goods are exported or not.

9. How difficult is it to comply with the requirements of the standard ?

The standard is the result of common sense set down on paper in an organized way. Once you read through it you will realise that the requirements specified in the standard are nothing but good management practice; things which you should have been doing in the first place. Any existing company usually conforms to 70 to 80 % of the requirements of the standard. After all, a company would not be able to exist for long without systems. However, the systems are usually informal and too man-dependent. Therefore conforming to the requirements of ISO 9000 means identifying the 20 - 30 % of the systems that are missing, and then making the entire system formal and well understood through documentation and training. The difficulty in implementing it depends on the state of your existing systems. If you already have good systems, then it is a matter of a few months of effort. Otherwise, if your systems are in some state of chaos, it may take a few years to put them in place. In any case, the initial effort required is usually high for an average company. But once the systems are in place, the regular effort to sustain them is comparatively negligible. It is therefore well worth the initial effort.

10. What are the benefits of complying with ISO 9000 ?

Some of the important benefits are :

• Highlights causes of chronic system-related problems : Implementing the requirements of ISO 9000 is a process of re-discovering your organisation, since it essentially tries to find the answer to the question : "Do you know how your company really works ?" Most managers think they do, but are quite surprised at the difference between their perception and the reality.


When they try to formalize a system, they realize that a number of things that should be happening are not happening, and vice-versa. This process also brings to the fore the differences in perception between different people, which is again a major cause of inconsistency. These are the problems that are highlighted and sorted out in the process of implementing ISO 9000.

• Helps reduce the cost of poor quality : Using ISO 9000 brings real economies in its wake, such as :

• economies in terms of preventive measures, because systems are planned and all possible loopholes are attempted to be plugged even before start-up;

• economies during normal operations, because all critical processes are controlled from start to finish as per well laid-down systems;

• economies in resources and in time spent on replanning or modifying design. This is so because the organisation learns from its mistakes (as well as its successes) and on each such occasion formally changes the right way to do the job, redocumenting the system and retraining the people working in it.

• Builds customer confidence : Even 300 % inspection is not foolproof. Therefore customers gain confidence if your systems are designed and operated in a manner in which poor quality is difficult to produce.

• Passport to Europe : As mentioned earlier, companies complying with ISO 9000 would have freer access to trade with European companies after 1992.

• Position of strength during product litigation : In case customers claim damages from you for supplying defective products, by virtue of a system based on ISO 9000 you will have a complete record of every stage of production; invaluable for product or process improvement and in relation to any product liability claim.

11. Product liability claim ? How is that concerned with ISO 9000 ?

In India we have never been too concerned about product liability claims. But this is going to change in the near future, at least for those who will be doing business in Europe and other such discerning markets. One of the most stringent EURONORMS is the 'Product Liability Directive'. In simple terms, what it specifies is that if a product fails and causes damage or loss to the user, the manufacturer is 'guilty until he proves himself innocent'. The onus of proof is on the manufacturer and not on the user, who may be guilty of misusing the product. ISO 9000 specifies the requirement for a 'Product identification and traceability' system (wherever applicable and contractually required). Only with the help of this system will you be able to prove, with the help of relevant records for process control, inspection, testing and so on, that the failure of the product was not due to lapses on your part.


It is feared that companies which do not have systems as per ISO 9000 may either have to pay exorbitant product liability insurance charges or be denied insurance altogether.

12. How is quality defined as per the standard ? Will it make existing product specifications obsolete ?

The definition of quality is given in a related standard, ISO 8402 - Quality Vocabulary. Quality is defined as "the totality of features and characteristics of a product or service that bear upon its ability to satisfy stated or implied needs". As can be seen from this definition, quality parameters and specifications are left to the organisation to determine, keeping in mind the customer's requirements. Most organisations produce a product or service based upon the customer's requirements, which are often incorporated in terms of specifications. However, technical specifications alone do not ensure that a customer's requirements will be consistently met if there happen to be any deficiencies in the systems used to design and produce that product. ISO 9000 is a standard for a system and will not have any impact on the specifications of a product. It will be complementary to product specifications and standards, and will only provide an assurance to customers that the manufacturer has the capability to meet their specifications consistently.

13. We do satisfy the specifications laid down by the customer, and to substantiate it we also provide him with our inspection and test certificates. So why should the customer bother about my organisation's systems as long as he is getting what he wants ?

The ability of an organisation to consistently produce what it intends to produce is definitely the hallmark of an organisation committed to the satisfaction of its customers. But then how do you provide this assurance ? The traditional method of assuring the customer, and yourselves, that bad quality does not leave your premises is to carry out inspections at various stages. But it has been proved that 100, 200 or even 300 % inspection is not foolproof. Process control by itself is also not enough unless it is supported by other systems. All your processes may be capable, but due to a system deficiency, such as having an old copy of a specification or a drawing, you may be sending bad quality products to your customers, unintentionally and unknowingly. Your control charts may also be misleading if your measuring equipment is not covered by a calibration system, or if your operators are not covered by a proper training system. If your company's design process is inconsistent, you will keep producing substandard quality due to poor product designs, in spite of having the best machines in your possession. These reasons, owing to which a customer may suffer at your hands, may sound silly, but in reality they do exist in most organisations due to a lack of good systems. If the output satisfies the customer only once in a while, then such a firm obviously does not have systems to assure consistency in quality. Therefore, discerning customers today are as bothered about your systems for producing quality as they are about the final result. They believe that "... if you look after your systems and processes, the product will look after itself".


14. Since no quality levels are specified in ISO 9000, does it mean that a company producing 75% defect-free products, as well as a company producing 95% defect-free products, can both be certified to the standard ?

Yes, both companies can be certified. Obviously your next question is, "... then what's the big deal about this standard ?" The answer is that zero defects is an ideal goal which every organisation should strive for. There are two prerequisites to reach that stage :

• A system for continuous improvement

• A system for consolidating all improvements that have been made.

ISO 9000 addresses both of the above issues, with a larger emphasis on the latter. The standard takes the view that, assuming a certain percentage of defectives is inherent in your system due to process or input deficiencies, what becomes important is your system for identifying and disposing of nonconforming product in a manner that ensures it does not reach the customer by mistake. Your system should assure a customer that, with the given level of defectives, enough checks have been installed in the system at various stages to ensure that defective products do not reach him. ISO 9000 also requires you to have systems for corrective action, whereby you are required to conduct periodic reviews of your quality performance and take steps to prevent recurrence of similar problems. It is therefore implied that companies which have formal systems for review and for taking subsequent corrective and preventive action will undoubtedly improve in the long run. Due to this interpretation you may argue that ISO 9000 cannot be viewed as an ultimate symbol of quality. At the same time you cannot dismiss it, since a company with systems conforming to the standard will not only be able to maintain its quality level but also improve upon it. Moreover, there is no denying the fact that an organisation which has fewer defects in comparison to a competitor will have lower costs, more reliable products, more customers and greater market share.

15. Will certification to the standard require a totally new way of working ?

The answer is both yes and no. Yes, because complying with ISO 9000 calls for greater emphasis on training, on document preparation and control, on timely audits and reviews of the system, as well as greater involvement of the process owners in ensuring shop-floor discipline and efficiency. Such a precise, disciplined and systematic way of working while adopting ISO 9000 will definitely mean a departure from the existing way of working, if that is not the way you presently work. No, because the requirements of ISO 9000 are nothing but common sense put down on paper in an organised manner. The requirements specified in the standard are nothing but good management practice; things which you should have been doing in the first place. So, in that respect, you will not require a totally new way of working.


NOT YET FULLY COMPLETED. THIS VERSION 0.1 IS A TRIAL VERSION. YOU CAN EXPECT FULLY COMPLETED MATERIAL ON SOFTWARE TESTING [A Practitioner's Guide to Software Testing] IN VERSION 1.0, WHICH WILL BE RELEASED ON 30TH MARCH 2005. PLEASE MAIL YOUR FEEDBACK ABOUT THIS VERSION TO [email protected] or [email protected]. THANKS AND REGARDS, MAHESH D.I